00:00:00.000 Started by upstream project "autotest-nightly" build number 4274 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3637 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.109 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.110 The recommended git tool is: git 00:00:00.110 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.155 Fetching changes from the remote Git repository 00:00:00.157 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.204 Using shallow fetch with depth 1 00:00:00.204 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.204 > git --version # timeout=10 00:00:00.246 > git --version # 'git version 2.39.2' 00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.275 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.275 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.978 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.991 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.002 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.002 > git config core.sparsecheckout # timeout=10 00:00:06.014 > git read-tree -mu HEAD # timeout=10 00:00:06.030 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # 
timeout=5 00:00:06.048 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.048 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.136 [Pipeline] Start of Pipeline 00:00:06.148 [Pipeline] library 00:00:06.149 Loading library shm_lib@master 00:00:06.149 Library shm_lib@master is cached. Copying from home. 00:00:06.164 [Pipeline] node 00:00:06.179 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:06.180 [Pipeline] { 00:00:06.189 [Pipeline] catchError 00:00:06.190 [Pipeline] { 00:00:06.202 [Pipeline] wrap 00:00:06.208 [Pipeline] { 00:00:06.214 [Pipeline] stage 00:00:06.216 [Pipeline] { (Prologue) 00:00:06.230 [Pipeline] echo 00:00:06.231 Node: VM-host-WFP7 00:00:06.237 [Pipeline] cleanWs 00:00:06.249 [WS-CLEANUP] Deleting project workspace... 00:00:06.249 [WS-CLEANUP] Deferred wipeout is used... 00:00:06.256 [WS-CLEANUP] done 00:00:06.427 [Pipeline] setCustomBuildProperty 00:00:06.485 [Pipeline] httpRequest 00:00:06.834 [Pipeline] echo 00:00:06.836 Sorcerer 10.211.164.20 is alive 00:00:06.844 [Pipeline] retry 00:00:06.846 [Pipeline] { 00:00:06.856 [Pipeline] httpRequest 00:00:06.860 HttpMethod: GET 00:00:06.860 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.861 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.862 Response Code: HTTP/1.1 200 OK 00:00:06.862 Success: Status code 200 is in the accepted range: 200,404 00:00:06.863 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.700 [Pipeline] } 00:00:07.714 [Pipeline] // retry 00:00:07.720 [Pipeline] sh 00:00:08.005 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.019 [Pipeline] httpRequest 00:00:08.494 [Pipeline] echo 00:00:08.496 Sorcerer 10.211.164.20 is alive 00:00:08.504 [Pipeline] retry 
00:00:08.506 [Pipeline] { 00:00:08.519 [Pipeline] httpRequest 00:00:08.524 HttpMethod: GET 00:00:08.525 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:08.525 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:08.534 Response Code: HTTP/1.1 200 OK 00:00:08.535 Success: Status code 200 is in the accepted range: 200,404 00:00:08.536 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:27.780 [Pipeline] } 00:01:27.799 [Pipeline] // retry 00:01:27.807 [Pipeline] sh 00:01:28.092 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:30.653 [Pipeline] sh 00:01:30.939 + git -C spdk log --oneline -n5 00:01:30.939 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:30.939 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:30.939 4bcab9fb9 correct kick for CQ full case 00:01:30.939 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:30.939 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:30.979 [Pipeline] writeFile 00:01:31.005 [Pipeline] sh 00:01:31.284 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.297 [Pipeline] sh 00:01:31.583 + cat autorun-spdk.conf 00:01:31.583 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.583 SPDK_RUN_ASAN=1 00:01:31.583 SPDK_RUN_UBSAN=1 00:01:31.583 SPDK_TEST_RAID=1 00:01:31.583 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.590 RUN_NIGHTLY=1 00:01:31.592 [Pipeline] } 00:01:31.604 [Pipeline] // stage 00:01:31.618 [Pipeline] stage 00:01:31.620 [Pipeline] { (Run VM) 00:01:31.632 [Pipeline] sh 00:01:31.916 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:31.916 + echo 'Start stage prepare_nvme.sh' 00:01:31.916 Start stage prepare_nvme.sh 00:01:31.916 + [[ -n 7 ]] 00:01:31.916 + disk_prefix=ex7 
00:01:31.916 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:31.916 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:31.916 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:31.916 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.916 ++ SPDK_RUN_ASAN=1 00:01:31.916 ++ SPDK_RUN_UBSAN=1 00:01:31.916 ++ SPDK_TEST_RAID=1 00:01:31.916 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.916 ++ RUN_NIGHTLY=1 00:01:31.916 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:31.916 + nvme_files=() 00:01:31.916 + declare -A nvme_files 00:01:31.916 + backend_dir=/var/lib/libvirt/images/backends 00:01:31.916 + nvme_files['nvme.img']=5G 00:01:31.916 + nvme_files['nvme-cmb.img']=5G 00:01:31.916 + nvme_files['nvme-multi0.img']=4G 00:01:31.916 + nvme_files['nvme-multi1.img']=4G 00:01:31.916 + nvme_files['nvme-multi2.img']=4G 00:01:31.916 + nvme_files['nvme-openstack.img']=8G 00:01:31.916 + nvme_files['nvme-zns.img']=5G 00:01:31.916 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:31.916 + (( SPDK_TEST_FTL == 1 )) 00:01:31.916 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:31.916 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:31.916 + for nvme in "${!nvme_files[@]}" 00:01:31.917 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:31.917 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.917 + for nvme in "${!nvme_files[@]}" 00:01:31.917 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:31.917 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.917 + for nvme in "${!nvme_files[@]}" 00:01:31.917 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:31.917 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:31.917 + for nvme in "${!nvme_files[@]}" 00:01:31.917 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:31.917 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.917 + for nvme in "${!nvme_files[@]}" 00:01:31.917 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:31.917 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.917 + for nvme in "${!nvme_files[@]}" 00:01:31.917 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:31.917 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.917 + for nvme in "${!nvme_files[@]}" 00:01:31.917 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:32.177 
Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.177 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:32.177 + echo 'End stage prepare_nvme.sh' 00:01:32.177 End stage prepare_nvme.sh 00:01:32.190 [Pipeline] sh 00:01:32.475 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:32.475 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:01:32.475 00:01:32.475 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:32.475 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:32.475 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:32.475 HELP=0 00:01:32.475 DRY_RUN=0 00:01:32.475 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:32.475 NVME_DISKS_TYPE=nvme,nvme, 00:01:32.475 NVME_AUTO_CREATE=0 00:01:32.475 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:32.475 NVME_CMB=,, 00:01:32.475 NVME_PMR=,, 00:01:32.475 NVME_ZNS=,, 00:01:32.475 NVME_MS=,, 00:01:32.475 NVME_FDP=,, 00:01:32.475 SPDK_VAGRANT_DISTRO=fedora39 00:01:32.475 SPDK_VAGRANT_VMCPU=10 00:01:32.475 SPDK_VAGRANT_VMRAM=12288 00:01:32.475 SPDK_VAGRANT_PROVIDER=libvirt 00:01:32.475 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:32.475 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:32.475 SPDK_OPENSTACK_NETWORK=0 00:01:32.475 VAGRANT_PACKAGE_BOX=0 00:01:32.475 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:32.475 
FORCE_DISTRO=true 00:01:32.475 VAGRANT_BOX_VERSION= 00:01:32.475 EXTRA_VAGRANTFILES= 00:01:32.475 NIC_MODEL=virtio 00:01:32.475 00:01:32.475 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:32.475 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:35.081 Bringing machine 'default' up with 'libvirt' provider... 00:01:35.081 ==> default: Creating image (snapshot of base box volume). 00:01:35.341 ==> default: Creating domain with the following settings... 00:01:35.341 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731806503_7be7314479161ae630cb 00:01:35.341 ==> default: -- Domain type: kvm 00:01:35.341 ==> default: -- Cpus: 10 00:01:35.341 ==> default: -- Feature: acpi 00:01:35.341 ==> default: -- Feature: apic 00:01:35.341 ==> default: -- Feature: pae 00:01:35.341 ==> default: -- Memory: 12288M 00:01:35.341 ==> default: -- Memory Backing: hugepages: 00:01:35.341 ==> default: -- Management MAC: 00:01:35.341 ==> default: -- Loader: 00:01:35.341 ==> default: -- Nvram: 00:01:35.341 ==> default: -- Base box: spdk/fedora39 00:01:35.341 ==> default: -- Storage pool: default 00:01:35.341 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731806503_7be7314479161ae630cb.img (20G) 00:01:35.341 ==> default: -- Volume Cache: default 00:01:35.341 ==> default: -- Kernel: 00:01:35.341 ==> default: -- Initrd: 00:01:35.341 ==> default: -- Graphics Type: vnc 00:01:35.341 ==> default: -- Graphics Port: -1 00:01:35.341 ==> default: -- Graphics IP: 127.0.0.1 00:01:35.341 ==> default: -- Graphics Password: Not defined 00:01:35.341 ==> default: -- Video Type: cirrus 00:01:35.341 ==> default: -- Video VRAM: 9216 00:01:35.341 ==> default: -- Sound Type: 00:01:35.341 ==> default: -- Keymap: en-us 00:01:35.341 ==> default: -- TPM Path: 00:01:35.341 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:35.341 ==> default: -- Command line args: 00:01:35.341 
==> default: -> value=-device, 00:01:35.341 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:35.341 ==> default: -> value=-drive, 00:01:35.341 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:35.341 ==> default: -> value=-device, 00:01:35.341 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.341 ==> default: -> value=-device, 00:01:35.341 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:35.341 ==> default: -> value=-drive, 00:01:35.341 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:35.341 ==> default: -> value=-device, 00:01:35.341 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.341 ==> default: -> value=-drive, 00:01:35.341 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:35.341 ==> default: -> value=-device, 00:01:35.341 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.341 ==> default: -> value=-drive, 00:01:35.341 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:35.341 ==> default: -> value=-device, 00:01:35.341 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.601 ==> default: Creating shared folders metadata... 00:01:35.601 ==> default: Starting domain. 00:01:36.543 ==> default: Waiting for domain to get an IP address... 00:01:54.675 ==> default: Waiting for SSH to become available... 00:01:54.675 ==> default: Configuring and enabling network interfaces... 
00:01:59.993 default: SSH address: 192.168.121.166:22 00:01:59.993 default: SSH username: vagrant 00:01:59.993 default: SSH auth method: private key 00:02:02.541 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:10.666 ==> default: Mounting SSHFS shared folder... 00:02:13.226 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:13.226 ==> default: Checking Mount.. 00:02:14.631 ==> default: Folder Successfully Mounted! 00:02:14.631 ==> default: Running provisioner: file... 00:02:16.014 default: ~/.gitconfig => .gitconfig 00:02:16.274 00:02:16.274 SUCCESS! 00:02:16.274 00:02:16.274 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:16.274 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:16.274 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:16.274 00:02:16.284 [Pipeline] } 00:02:16.301 [Pipeline] // stage 00:02:16.311 [Pipeline] dir 00:02:16.311 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:16.313 [Pipeline] { 00:02:16.327 [Pipeline] catchError 00:02:16.329 [Pipeline] { 00:02:16.344 [Pipeline] sh 00:02:16.630 + vagrant ssh-config --host vagrant 00:02:16.630 + sed -ne /^Host/,$p 00:02:16.630 + tee ssh_conf 00:02:19.920 Host vagrant 00:02:19.920 HostName 192.168.121.166 00:02:19.920 User vagrant 00:02:19.920 Port 22 00:02:19.920 UserKnownHostsFile /dev/null 00:02:19.920 StrictHostKeyChecking no 00:02:19.920 PasswordAuthentication no 00:02:19.920 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:19.920 IdentitiesOnly yes 00:02:19.920 LogLevel FATAL 00:02:19.920 ForwardAgent yes 00:02:19.920 ForwardX11 yes 00:02:19.920 00:02:19.933 [Pipeline] withEnv 00:02:19.935 [Pipeline] { 00:02:19.950 [Pipeline] sh 00:02:20.234 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:20.234 source /etc/os-release 00:02:20.234 [[ -e /image.version ]] && img=$(< /image.version) 00:02:20.234 # Minimal, systemd-like check. 00:02:20.234 if [[ -e /.dockerenv ]]; then 00:02:20.234 # Clear garbage from the node's name: 00:02:20.234 # agt-er_autotest_547-896 -> autotest_547-896 00:02:20.234 # $HOSTNAME is the actual container id 00:02:20.234 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:20.234 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:20.234 # We can assume this is a mount from a host where container is running, 00:02:20.234 # so fetch its hostname to easily identify the target swarm worker. 
00:02:20.234 container="$(< /etc/hostname) ($agent)" 00:02:20.234 else 00:02:20.234 # Fallback 00:02:20.234 container=$agent 00:02:20.234 fi 00:02:20.234 fi 00:02:20.234 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:20.234 00:02:20.506 [Pipeline] } 00:02:20.525 [Pipeline] // withEnv 00:02:20.535 [Pipeline] setCustomBuildProperty 00:02:20.552 [Pipeline] stage 00:02:20.555 [Pipeline] { (Tests) 00:02:20.576 [Pipeline] sh 00:02:20.858 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:21.135 [Pipeline] sh 00:02:21.418 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:21.692 [Pipeline] timeout 00:02:21.692 Timeout set to expire in 1 hr 30 min 00:02:21.694 [Pipeline] { 00:02:21.707 [Pipeline] sh 00:02:21.988 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:22.558 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:02:22.570 [Pipeline] sh 00:02:22.923 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:23.197 [Pipeline] sh 00:02:23.480 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:23.758 [Pipeline] sh 00:02:24.041 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:24.301 ++ readlink -f spdk_repo 00:02:24.301 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:24.301 + [[ -n /home/vagrant/spdk_repo ]] 00:02:24.301 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:24.301 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:24.301 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:24.301 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:24.301 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:24.301 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:24.301 + cd /home/vagrant/spdk_repo 00:02:24.301 + source /etc/os-release 00:02:24.301 ++ NAME='Fedora Linux' 00:02:24.301 ++ VERSION='39 (Cloud Edition)' 00:02:24.301 ++ ID=fedora 00:02:24.301 ++ VERSION_ID=39 00:02:24.301 ++ VERSION_CODENAME= 00:02:24.301 ++ PLATFORM_ID=platform:f39 00:02:24.301 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:24.301 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:24.301 ++ LOGO=fedora-logo-icon 00:02:24.301 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:24.301 ++ HOME_URL=https://fedoraproject.org/ 00:02:24.301 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:24.301 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:24.301 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:24.301 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:24.301 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:24.301 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:24.301 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:24.301 ++ SUPPORT_END=2024-11-12 00:02:24.301 ++ VARIANT='Cloud Edition' 00:02:24.301 ++ VARIANT_ID=cloud 00:02:24.301 + uname -a 00:02:24.301 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:24.301 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:24.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:24.871 Hugepages 00:02:24.871 node hugesize free / total 00:02:24.871 node0 1048576kB 0 / 0 00:02:24.871 node0 2048kB 0 / 0 00:02:24.871 00:02:24.871 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:24.871 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:24.871 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:24.871 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:24.871 + rm -f /tmp/spdk-ld-path 00:02:24.871 + source autorun-spdk.conf 00:02:24.871 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.871 ++ SPDK_RUN_ASAN=1 00:02:24.871 ++ SPDK_RUN_UBSAN=1 00:02:24.871 ++ SPDK_TEST_RAID=1 00:02:24.871 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:24.871 ++ RUN_NIGHTLY=1 00:02:24.871 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:24.871 + [[ -n '' ]] 00:02:24.871 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:24.871 + for M in /var/spdk/build-*-manifest.txt 00:02:24.871 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:24.871 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:24.871 + for M in /var/spdk/build-*-manifest.txt 00:02:24.871 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:24.871 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:24.871 + for M in /var/spdk/build-*-manifest.txt 00:02:24.871 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:24.871 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.130 ++ uname 00:02:25.130 + [[ Linux == \L\i\n\u\x ]] 00:02:25.130 + sudo dmesg -T 00:02:25.130 + sudo dmesg --clear 00:02:25.130 + dmesg_pid=5414 00:02:25.130 + sudo dmesg -Tw 00:02:25.130 + [[ Fedora Linux == FreeBSD ]] 00:02:25.130 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.130 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.130 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:25.130 + [[ -x /usr/src/fio-static/fio ]] 00:02:25.130 + export FIO_BIN=/usr/src/fio-static/fio 00:02:25.130 + FIO_BIN=/usr/src/fio-static/fio 00:02:25.130 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:25.130 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:25.130 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:25.130 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.130 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.130 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:25.130 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.130 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.130 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:25.130 01:22:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:25.130 01:22:33 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:25.130 01:22:33 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.130 01:22:33 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:25.130 01:22:33 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:25.130 01:22:33 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:25.130 01:22:33 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.130 01:22:33 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:02:25.130 01:22:33 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:25.130 01:22:33 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:25.130 01:22:33 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:25.130 01:22:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:25.130 01:22:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:25.130 01:22:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:25.130 01:22:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.130 01:22:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.130 01:22:33 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.130 01:22:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.130 01:22:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.130 01:22:33 -- paths/export.sh@5 -- $ export PATH 00:02:25.130 01:22:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.130 01:22:33 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:25.130 01:22:33 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:25.130 01:22:33 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731806553.XXXXXX 00:02:25.130 01:22:33 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731806553.ll2QoG 00:02:25.130 01:22:33 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:25.130 01:22:33 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:25.130 01:22:33 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:25.130 01:22:33 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:25.130 01:22:33 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:25.130 01:22:33 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:25.130 01:22:33 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:25.130 01:22:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.389 01:22:33 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:02:25.389 01:22:33 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:25.389 01:22:33 -- pm/common@17 -- $ local monitor 00:02:25.389 01:22:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.389 01:22:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.389 01:22:33 -- pm/common@25 -- $ sleep 1 00:02:25.389 01:22:33 -- pm/common@21 -- $ date +%s 00:02:25.389 01:22:33 -- pm/common@21 -- $ date +%s 00:02:25.389 
01:22:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731806553 00:02:25.389 01:22:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731806553 00:02:25.389 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731806553_collect-vmstat.pm.log 00:02:25.389 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731806553_collect-cpu-load.pm.log 00:02:26.327 01:22:34 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:26.327 01:22:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:26.327 01:22:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:26.327 01:22:34 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:26.327 01:22:34 -- spdk/autobuild.sh@16 -- $ date -u 00:02:26.327 Sun Nov 17 01:22:34 AM UTC 2024 00:02:26.327 01:22:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:26.327 v25.01-pre-189-g83e8405e4 00:02:26.327 01:22:34 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:26.327 01:22:34 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:26.327 01:22:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:26.327 01:22:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:26.327 01:22:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.327 ************************************ 00:02:26.327 START TEST asan 00:02:26.327 ************************************ 00:02:26.327 using asan 00:02:26.327 01:22:34 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:26.327 00:02:26.327 real 0m0.000s 00:02:26.327 user 0m0.000s 00:02:26.327 sys 0m0.000s 00:02:26.327 01:22:34 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:26.327 01:22:34 asan -- common/autotest_common.sh@10 -- $ set +x 
00:02:26.327 ************************************
00:02:26.327 END TEST asan
00:02:26.327 ************************************
00:02:26.327 01:22:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:26.327 01:22:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:26.327 01:22:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:26.327 01:22:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:26.327 01:22:34 -- common/autotest_common.sh@10 -- $ set +x
00:02:26.327 ************************************
00:02:26.327 START TEST ubsan
00:02:26.327 ************************************
00:02:26.327 using ubsan
00:02:26.327 01:22:34 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:26.327
00:02:26.327 real 0m0.000s
00:02:26.327 user 0m0.000s
00:02:26.327 sys 0m0.000s
00:02:26.327 01:22:34 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:26.327 01:22:34 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:26.327 ************************************
00:02:26.327 END TEST ubsan
00:02:26.327 ************************************
00:02:26.327 01:22:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:26.327 01:22:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:26.327 01:22:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:26.327 01:22:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:26.327 01:22:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:26.327 01:22:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:26.327 01:22:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:26.327 01:22:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:26.327 01:22:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:26.587 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:26.587 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:27.155 Using 'verbs' RDMA provider
00:02:43.010 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:57.898 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:58.728 Creating mk/config.mk...done.
00:02:58.728 Creating mk/cc.flags.mk...done.
00:02:58.728 Type 'make' to build.
00:02:58.728 01:23:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:58.728 01:23:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:58.728 01:23:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:58.728 01:23:06 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.728 ************************************
00:02:58.728 START TEST make
00:02:58.728 ************************************
00:02:58.728 01:23:06 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:58.988 make[1]: Nothing to be done for 'all'.
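The build above runs `make -j10` with a hard-coded job count. A common alternative is deriving the count from the host; a small sketch (illustrative — SPDK's autotest picks its own value, and `pick_jobs` is a hypothetical helper name):

```shell
#!/usr/bin/env bash
# Derive a `make -j` value from the host CPU count instead of
# hard-coding one. Falls back through getconf to 1 when nproc
# is unavailable on the system.
pick_jobs() {
    nproc 2>/dev/null && return
    getconf _NPROCESSORS_ONLN 2>/dev/null && return
    echo 1
}

jobs=$(pick_jobs)
echo "would run: make -j${jobs}"
```

On a 10-CPU builder like the one in this log, `pick_jobs` would yield 10, matching the `-j10` invocation.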
00:03:11.209 The Meson build system 00:03:11.209 Version: 1.5.0 00:03:11.209 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:11.209 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:11.209 Build type: native build 00:03:11.209 Program cat found: YES (/usr/bin/cat) 00:03:11.209 Project name: DPDK 00:03:11.209 Project version: 24.03.0 00:03:11.209 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:11.209 C linker for the host machine: cc ld.bfd 2.40-14 00:03:11.209 Host machine cpu family: x86_64 00:03:11.209 Host machine cpu: x86_64 00:03:11.209 Message: ## Building in Developer Mode ## 00:03:11.209 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:11.209 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:11.209 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:11.209 Program python3 found: YES (/usr/bin/python3) 00:03:11.209 Program cat found: YES (/usr/bin/cat) 00:03:11.209 Compiler for C supports arguments -march=native: YES 00:03:11.209 Checking for size of "void *" : 8 00:03:11.209 Checking for size of "void *" : 8 (cached) 00:03:11.209 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:11.209 Library m found: YES 00:03:11.209 Library numa found: YES 00:03:11.209 Has header "numaif.h" : YES 00:03:11.209 Library fdt found: NO 00:03:11.209 Library execinfo found: NO 00:03:11.210 Has header "execinfo.h" : YES 00:03:11.210 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:11.210 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:11.210 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:11.210 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:11.210 Run-time dependency openssl found: YES 3.1.1 00:03:11.210 Run-time dependency libpcap found: YES 1.10.4 00:03:11.210 Has header "pcap.h" with dependency 
libpcap: YES 00:03:11.210 Compiler for C supports arguments -Wcast-qual: YES 00:03:11.210 Compiler for C supports arguments -Wdeprecated: YES 00:03:11.210 Compiler for C supports arguments -Wformat: YES 00:03:11.210 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:11.210 Compiler for C supports arguments -Wformat-security: NO 00:03:11.210 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:11.210 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:11.210 Compiler for C supports arguments -Wnested-externs: YES 00:03:11.210 Compiler for C supports arguments -Wold-style-definition: YES 00:03:11.210 Compiler for C supports arguments -Wpointer-arith: YES 00:03:11.210 Compiler for C supports arguments -Wsign-compare: YES 00:03:11.210 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:11.210 Compiler for C supports arguments -Wundef: YES 00:03:11.210 Compiler for C supports arguments -Wwrite-strings: YES 00:03:11.210 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:11.210 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:11.210 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:11.210 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:11.210 Program objdump found: YES (/usr/bin/objdump) 00:03:11.210 Compiler for C supports arguments -mavx512f: YES 00:03:11.210 Checking if "AVX512 checking" compiles: YES 00:03:11.210 Fetching value of define "__SSE4_2__" : 1 00:03:11.210 Fetching value of define "__AES__" : 1 00:03:11.210 Fetching value of define "__AVX__" : 1 00:03:11.210 Fetching value of define "__AVX2__" : 1 00:03:11.210 Fetching value of define "__AVX512BW__" : 1 00:03:11.210 Fetching value of define "__AVX512CD__" : 1 00:03:11.210 Fetching value of define "__AVX512DQ__" : 1 00:03:11.210 Fetching value of define "__AVX512F__" : 1 00:03:11.210 Fetching value of define "__AVX512VL__" : 1 00:03:11.210 Fetching value of define 
"__PCLMUL__" : 1 00:03:11.210 Fetching value of define "__RDRND__" : 1 00:03:11.210 Fetching value of define "__RDSEED__" : 1 00:03:11.210 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:11.210 Fetching value of define "__znver1__" : (undefined) 00:03:11.210 Fetching value of define "__znver2__" : (undefined) 00:03:11.210 Fetching value of define "__znver3__" : (undefined) 00:03:11.210 Fetching value of define "__znver4__" : (undefined) 00:03:11.210 Library asan found: YES 00:03:11.210 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:11.210 Message: lib/log: Defining dependency "log" 00:03:11.210 Message: lib/kvargs: Defining dependency "kvargs" 00:03:11.210 Message: lib/telemetry: Defining dependency "telemetry" 00:03:11.210 Library rt found: YES 00:03:11.210 Checking for function "getentropy" : NO 00:03:11.210 Message: lib/eal: Defining dependency "eal" 00:03:11.210 Message: lib/ring: Defining dependency "ring" 00:03:11.210 Message: lib/rcu: Defining dependency "rcu" 00:03:11.210 Message: lib/mempool: Defining dependency "mempool" 00:03:11.210 Message: lib/mbuf: Defining dependency "mbuf" 00:03:11.210 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:11.210 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:11.210 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:11.210 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:11.210 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:11.210 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:11.210 Compiler for C supports arguments -mpclmul: YES 00:03:11.210 Compiler for C supports arguments -maes: YES 00:03:11.210 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:11.210 Compiler for C supports arguments -mavx512bw: YES 00:03:11.210 Compiler for C supports arguments -mavx512dq: YES 00:03:11.210 Compiler for C supports arguments -mavx512vl: YES 00:03:11.210 Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:11.210 Compiler for C supports arguments -mavx2: YES 00:03:11.210 Compiler for C supports arguments -mavx: YES 00:03:11.210 Message: lib/net: Defining dependency "net" 00:03:11.210 Message: lib/meter: Defining dependency "meter" 00:03:11.210 Message: lib/ethdev: Defining dependency "ethdev" 00:03:11.210 Message: lib/pci: Defining dependency "pci" 00:03:11.210 Message: lib/cmdline: Defining dependency "cmdline" 00:03:11.210 Message: lib/hash: Defining dependency "hash" 00:03:11.210 Message: lib/timer: Defining dependency "timer" 00:03:11.210 Message: lib/compressdev: Defining dependency "compressdev" 00:03:11.210 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:11.210 Message: lib/dmadev: Defining dependency "dmadev" 00:03:11.210 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:11.210 Message: lib/power: Defining dependency "power" 00:03:11.210 Message: lib/reorder: Defining dependency "reorder" 00:03:11.210 Message: lib/security: Defining dependency "security" 00:03:11.210 Has header "linux/userfaultfd.h" : YES 00:03:11.210 Has header "linux/vduse.h" : YES 00:03:11.210 Message: lib/vhost: Defining dependency "vhost" 00:03:11.210 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:11.210 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:11.210 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:11.210 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:11.210 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:11.210 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:11.210 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:11.210 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:11.210 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:11.210 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:11.210 
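The many "Compiler for C supports arguments ...: YES/NO" lines above are Meson compile probes: it attempts to build a trivial program with each flag and records whether the compiler accepts it. A shell sketch of the same idea (an assumption-laden illustration, not Meson's actual check; it assumes a `cc` driver where one exists and reports UNKNOWN otherwise):

```shell
#!/usr/bin/env bash
# Probe whether the C compiler accepts a flag, in the spirit of
# Meson's "Compiler for C supports arguments" checks. Prints YES,
# NO, or UNKNOWN (no compiler found). Illustrative sketch only.
supports_cflag() {
    local flag=$1
    command -v cc >/dev/null 2>&1 || { echo UNKNOWN; return; }
    # -Werror turns "unknown option" warnings into hard failures,
    # so acceptance of the flag is what's actually being tested.
    if echo 'int main(void){return 0;}' |
        cc -Werror "$flag" -x c -o /dev/null - 2>/dev/null; then
        echo YES
    else
        echo NO
    fi
}
```

For example, `supports_cflag -mavx512f` mirrors the `-mavx512f: YES` probe in the log on an x86_64 toolchain.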
Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:11.210 Configuring doxy-api-html.conf using configuration 00:03:11.210 Configuring doxy-api-man.conf using configuration 00:03:11.210 Program mandb found: YES (/usr/bin/mandb) 00:03:11.210 Program sphinx-build found: NO 00:03:11.210 Configuring rte_build_config.h using configuration 00:03:11.210 Message: 00:03:11.210 ================= 00:03:11.210 Applications Enabled 00:03:11.210 ================= 00:03:11.210 00:03:11.210 apps: 00:03:11.210 00:03:11.210 00:03:11.210 Message: 00:03:11.210 ================= 00:03:11.210 Libraries Enabled 00:03:11.210 ================= 00:03:11.210 00:03:11.210 libs: 00:03:11.210 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:11.210 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:11.210 cryptodev, dmadev, power, reorder, security, vhost, 00:03:11.210 00:03:11.210 Message: 00:03:11.210 =============== 00:03:11.210 Drivers Enabled 00:03:11.210 =============== 00:03:11.210 00:03:11.210 common: 00:03:11.210 00:03:11.210 bus: 00:03:11.210 pci, vdev, 00:03:11.210 mempool: 00:03:11.210 ring, 00:03:11.210 dma: 00:03:11.210 00:03:11.210 net: 00:03:11.210 00:03:11.210 crypto: 00:03:11.210 00:03:11.210 compress: 00:03:11.210 00:03:11.210 vdpa: 00:03:11.210 00:03:11.210 00:03:11.210 Message: 00:03:11.210 ================= 00:03:11.210 Content Skipped 00:03:11.210 ================= 00:03:11.210 00:03:11.210 apps: 00:03:11.210 dumpcap: explicitly disabled via build config 00:03:11.210 graph: explicitly disabled via build config 00:03:11.210 pdump: explicitly disabled via build config 00:03:11.210 proc-info: explicitly disabled via build config 00:03:11.210 test-acl: explicitly disabled via build config 00:03:11.210 test-bbdev: explicitly disabled via build config 00:03:11.210 test-cmdline: explicitly disabled via build config 00:03:11.210 test-compress-perf: explicitly disabled via build config 00:03:11.210 test-crypto-perf: explicitly disabled via build 
config 00:03:11.210 test-dma-perf: explicitly disabled via build config 00:03:11.210 test-eventdev: explicitly disabled via build config 00:03:11.210 test-fib: explicitly disabled via build config 00:03:11.210 test-flow-perf: explicitly disabled via build config 00:03:11.211 test-gpudev: explicitly disabled via build config 00:03:11.211 test-mldev: explicitly disabled via build config 00:03:11.211 test-pipeline: explicitly disabled via build config 00:03:11.211 test-pmd: explicitly disabled via build config 00:03:11.211 test-regex: explicitly disabled via build config 00:03:11.211 test-sad: explicitly disabled via build config 00:03:11.211 test-security-perf: explicitly disabled via build config 00:03:11.211 00:03:11.211 libs: 00:03:11.211 argparse: explicitly disabled via build config 00:03:11.211 metrics: explicitly disabled via build config 00:03:11.211 acl: explicitly disabled via build config 00:03:11.211 bbdev: explicitly disabled via build config 00:03:11.211 bitratestats: explicitly disabled via build config 00:03:11.211 bpf: explicitly disabled via build config 00:03:11.211 cfgfile: explicitly disabled via build config 00:03:11.211 distributor: explicitly disabled via build config 00:03:11.211 efd: explicitly disabled via build config 00:03:11.211 eventdev: explicitly disabled via build config 00:03:11.211 dispatcher: explicitly disabled via build config 00:03:11.211 gpudev: explicitly disabled via build config 00:03:11.211 gro: explicitly disabled via build config 00:03:11.211 gso: explicitly disabled via build config 00:03:11.211 ip_frag: explicitly disabled via build config 00:03:11.211 jobstats: explicitly disabled via build config 00:03:11.211 latencystats: explicitly disabled via build config 00:03:11.211 lpm: explicitly disabled via build config 00:03:11.211 member: explicitly disabled via build config 00:03:11.211 pcapng: explicitly disabled via build config 00:03:11.211 rawdev: explicitly disabled via build config 00:03:11.211 regexdev: explicitly 
disabled via build config 00:03:11.211 mldev: explicitly disabled via build config 00:03:11.211 rib: explicitly disabled via build config 00:03:11.211 sched: explicitly disabled via build config 00:03:11.211 stack: explicitly disabled via build config 00:03:11.211 ipsec: explicitly disabled via build config 00:03:11.211 pdcp: explicitly disabled via build config 00:03:11.211 fib: explicitly disabled via build config 00:03:11.211 port: explicitly disabled via build config 00:03:11.211 pdump: explicitly disabled via build config 00:03:11.211 table: explicitly disabled via build config 00:03:11.211 pipeline: explicitly disabled via build config 00:03:11.211 graph: explicitly disabled via build config 00:03:11.211 node: explicitly disabled via build config 00:03:11.211 00:03:11.211 drivers: 00:03:11.211 common/cpt: not in enabled drivers build config 00:03:11.211 common/dpaax: not in enabled drivers build config 00:03:11.211 common/iavf: not in enabled drivers build config 00:03:11.211 common/idpf: not in enabled drivers build config 00:03:11.211 common/ionic: not in enabled drivers build config 00:03:11.211 common/mvep: not in enabled drivers build config 00:03:11.211 common/octeontx: not in enabled drivers build config 00:03:11.211 bus/auxiliary: not in enabled drivers build config 00:03:11.211 bus/cdx: not in enabled drivers build config 00:03:11.211 bus/dpaa: not in enabled drivers build config 00:03:11.211 bus/fslmc: not in enabled drivers build config 00:03:11.211 bus/ifpga: not in enabled drivers build config 00:03:11.211 bus/platform: not in enabled drivers build config 00:03:11.211 bus/uacce: not in enabled drivers build config 00:03:11.211 bus/vmbus: not in enabled drivers build config 00:03:11.211 common/cnxk: not in enabled drivers build config 00:03:11.211 common/mlx5: not in enabled drivers build config 00:03:11.211 common/nfp: not in enabled drivers build config 00:03:11.211 common/nitrox: not in enabled drivers build config 00:03:11.211 common/qat: not 
in enabled drivers build config 00:03:11.211 common/sfc_efx: not in enabled drivers build config 00:03:11.211 mempool/bucket: not in enabled drivers build config 00:03:11.211 mempool/cnxk: not in enabled drivers build config 00:03:11.211 mempool/dpaa: not in enabled drivers build config 00:03:11.211 mempool/dpaa2: not in enabled drivers build config 00:03:11.211 mempool/octeontx: not in enabled drivers build config 00:03:11.211 mempool/stack: not in enabled drivers build config 00:03:11.211 dma/cnxk: not in enabled drivers build config 00:03:11.211 dma/dpaa: not in enabled drivers build config 00:03:11.211 dma/dpaa2: not in enabled drivers build config 00:03:11.211 dma/hisilicon: not in enabled drivers build config 00:03:11.211 dma/idxd: not in enabled drivers build config 00:03:11.211 dma/ioat: not in enabled drivers build config 00:03:11.211 dma/skeleton: not in enabled drivers build config 00:03:11.211 net/af_packet: not in enabled drivers build config 00:03:11.211 net/af_xdp: not in enabled drivers build config 00:03:11.211 net/ark: not in enabled drivers build config 00:03:11.211 net/atlantic: not in enabled drivers build config 00:03:11.211 net/avp: not in enabled drivers build config 00:03:11.211 net/axgbe: not in enabled drivers build config 00:03:11.211 net/bnx2x: not in enabled drivers build config 00:03:11.211 net/bnxt: not in enabled drivers build config 00:03:11.211 net/bonding: not in enabled drivers build config 00:03:11.211 net/cnxk: not in enabled drivers build config 00:03:11.211 net/cpfl: not in enabled drivers build config 00:03:11.211 net/cxgbe: not in enabled drivers build config 00:03:11.211 net/dpaa: not in enabled drivers build config 00:03:11.211 net/dpaa2: not in enabled drivers build config 00:03:11.211 net/e1000: not in enabled drivers build config 00:03:11.211 net/ena: not in enabled drivers build config 00:03:11.211 net/enetc: not in enabled drivers build config 00:03:11.211 net/enetfec: not in enabled drivers build config 
00:03:11.211 net/enic: not in enabled drivers build config 00:03:11.211 net/failsafe: not in enabled drivers build config 00:03:11.211 net/fm10k: not in enabled drivers build config 00:03:11.211 net/gve: not in enabled drivers build config 00:03:11.211 net/hinic: not in enabled drivers build config 00:03:11.211 net/hns3: not in enabled drivers build config 00:03:11.211 net/i40e: not in enabled drivers build config 00:03:11.211 net/iavf: not in enabled drivers build config 00:03:11.211 net/ice: not in enabled drivers build config 00:03:11.211 net/idpf: not in enabled drivers build config 00:03:11.211 net/igc: not in enabled drivers build config 00:03:11.211 net/ionic: not in enabled drivers build config 00:03:11.211 net/ipn3ke: not in enabled drivers build config 00:03:11.211 net/ixgbe: not in enabled drivers build config 00:03:11.211 net/mana: not in enabled drivers build config 00:03:11.211 net/memif: not in enabled drivers build config 00:03:11.211 net/mlx4: not in enabled drivers build config 00:03:11.211 net/mlx5: not in enabled drivers build config 00:03:11.211 net/mvneta: not in enabled drivers build config 00:03:11.211 net/mvpp2: not in enabled drivers build config 00:03:11.211 net/netvsc: not in enabled drivers build config 00:03:11.211 net/nfb: not in enabled drivers build config 00:03:11.211 net/nfp: not in enabled drivers build config 00:03:11.211 net/ngbe: not in enabled drivers build config 00:03:11.211 net/null: not in enabled drivers build config 00:03:11.211 net/octeontx: not in enabled drivers build config 00:03:11.211 net/octeon_ep: not in enabled drivers build config 00:03:11.211 net/pcap: not in enabled drivers build config 00:03:11.211 net/pfe: not in enabled drivers build config 00:03:11.211 net/qede: not in enabled drivers build config 00:03:11.211 net/ring: not in enabled drivers build config 00:03:11.211 net/sfc: not in enabled drivers build config 00:03:11.211 net/softnic: not in enabled drivers build config 00:03:11.211 net/tap: not in 
enabled drivers build config 00:03:11.211 net/thunderx: not in enabled drivers build config 00:03:11.211 net/txgbe: not in enabled drivers build config 00:03:11.211 net/vdev_netvsc: not in enabled drivers build config 00:03:11.211 net/vhost: not in enabled drivers build config 00:03:11.211 net/virtio: not in enabled drivers build config 00:03:11.211 net/vmxnet3: not in enabled drivers build config 00:03:11.211 raw/*: missing internal dependency, "rawdev" 00:03:11.211 crypto/armv8: not in enabled drivers build config 00:03:11.211 crypto/bcmfs: not in enabled drivers build config 00:03:11.211 crypto/caam_jr: not in enabled drivers build config 00:03:11.211 crypto/ccp: not in enabled drivers build config 00:03:11.212 crypto/cnxk: not in enabled drivers build config 00:03:11.212 crypto/dpaa_sec: not in enabled drivers build config 00:03:11.212 crypto/dpaa2_sec: not in enabled drivers build config 00:03:11.212 crypto/ipsec_mb: not in enabled drivers build config 00:03:11.212 crypto/mlx5: not in enabled drivers build config 00:03:11.212 crypto/mvsam: not in enabled drivers build config 00:03:11.212 crypto/nitrox: not in enabled drivers build config 00:03:11.212 crypto/null: not in enabled drivers build config 00:03:11.212 crypto/octeontx: not in enabled drivers build config 00:03:11.212 crypto/openssl: not in enabled drivers build config 00:03:11.212 crypto/scheduler: not in enabled drivers build config 00:03:11.212 crypto/uadk: not in enabled drivers build config 00:03:11.212 crypto/virtio: not in enabled drivers build config 00:03:11.212 compress/isal: not in enabled drivers build config 00:03:11.212 compress/mlx5: not in enabled drivers build config 00:03:11.212 compress/nitrox: not in enabled drivers build config 00:03:11.212 compress/octeontx: not in enabled drivers build config 00:03:11.212 compress/zlib: not in enabled drivers build config 00:03:11.212 regex/*: missing internal dependency, "regexdev" 00:03:11.212 ml/*: missing internal dependency, "mldev" 
00:03:11.212 vdpa/ifc: not in enabled drivers build config
00:03:11.212 vdpa/mlx5: not in enabled drivers build config
00:03:11.212 vdpa/nfp: not in enabled drivers build config
00:03:11.212 vdpa/sfc: not in enabled drivers build config
00:03:11.212 event/*: missing internal dependency, "eventdev"
00:03:11.212 baseband/*: missing internal dependency, "bbdev"
00:03:11.212 gpu/*: missing internal dependency, "gpudev"
00:03:11.212
00:03:11.212
00:03:11.212 Build targets in project: 85
00:03:11.212
00:03:11.212
00:03:11.212 DPDK 24.03.0
00:03:11.212
00:03:11.212
00:03:11.212 User defined options
00:03:11.212 buildtype : debug
00:03:11.212 default_library : shared
00:03:11.212 libdir : lib
00:03:11.212 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:11.212 b_sanitize : address
00:03:11.212 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:11.212 c_link_args :
00:03:11.212 cpu_instruction_set: native
00:03:11.212 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:11.212 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:11.212 enable_docs : false
00:03:11.212 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:11.212 enable_kmods : false
00:03:11.212 max_lcores : 128
00:03:11.212 tests : false
00:03:11.212
00:03:11.212 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:11.212 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:11.212 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:11.212 [2/268] Compiling C object
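The "User defined options" summary above reflects comma-separated `disable_apps`/`disable_libs` lists that end up as Meson `-D` arguments for the DPDK subproject. A small sketch of assembling such arguments from shell variables (option names mirror the log; `build_meson_args` is a hypothetical helper, not SPDK's actual dpdkbuild wiring):

```shell
#!/usr/bin/env bash
# Assemble Meson -D arguments from comma-separated option lists,
# the shape shown in the "User defined options" summary above.
# Hypothetical helper for illustration; prints one arg per line.
build_meson_args() {
    local disable_apps=$1 disable_libs=$2
    local args=("-Dbuildtype=debug" "-Ddefault_library=shared")
    [ -n "$disable_apps" ] && args+=("-Ddisable_apps=$disable_apps")
    [ -n "$disable_libs" ] && args+=("-Ddisable_libs=$disable_libs")
    printf '%s\n' "${args[@]}"
}

# Demo with short lists (the real run passes the full lists above).
build_meson_args "dumpcap,graph" "acl,bbdev"
```

These lines would then be passed to `meson setup` for the DPDK build directory.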
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:11.212 [3/268] Linking static target lib/librte_kvargs.a 00:03:11.212 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:11.212 [5/268] Linking static target lib/librte_log.a 00:03:11.212 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:11.212 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:11.212 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.212 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:11.212 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:11.212 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:11.212 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:11.212 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:11.212 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:11.212 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:11.212 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:11.212 [17/268] Linking static target lib/librte_telemetry.a 00:03:11.212 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:11.212 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:11.212 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:11.212 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.212 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:11.212 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:11.212 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:11.212 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:11.212 [26/268] Linking target lib/librte_log.so.24.1 00:03:11.212 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:11.472 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:11.472 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:11.472 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:11.472 [31/268] Linking target lib/librte_kvargs.so.24.1 00:03:11.732 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.732 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:11.732 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:11.732 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:11.732 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:11.732 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:11.732 [38/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:11.732 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:11.992 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:11.992 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:11.992 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:11.992 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:11.992 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:11.992 [45/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:12.252 [46/268] Compiling C 
object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:12.253 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:12.253 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:12.512 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:12.512 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:12.512 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:12.512 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:12.771 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:12.771 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:12.771 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:12.771 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:12.771 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:13.031 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:13.031 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:13.031 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:13.031 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:13.031 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:13.031 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:13.291 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:13.291 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:13.291 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:13.291 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:13.549 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 
00:03:13.549 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:13.808 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:13.808 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:13.808 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:13.808 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:13.808 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:13.808 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:13.808 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:13.808 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:13.808 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:13.808 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:14.067 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:14.326 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:14.326 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:14.326 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:14.326 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:14.326 [85/268] Linking static target lib/librte_eal.a 00:03:14.326 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:14.585 [87/268] Linking static target lib/librte_ring.a 00:03:14.585 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:14.585 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:14.585 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:14.585 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:14.844 [92/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:14.844 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:14.844 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:14.844 [95/268] Linking static target lib/librte_rcu.a 00:03:14.844 [96/268] Linking static target lib/librte_mempool.a 00:03:14.844 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.105 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:15.105 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:15.105 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:15.105 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:15.375 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:15.375 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.375 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:15.375 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:15.375 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:15.375 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:15.376 [108/268] Linking static target lib/librte_net.a 00:03:15.376 [109/268] Linking static target lib/librte_meter.a 00:03:15.636 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:15.636 [111/268] Linking static target lib/librte_mbuf.a 00:03:15.896 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:15.896 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.896 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:15.896 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:15.896 [116/268] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:15.896 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.155 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:16.414 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:16.414 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:16.414 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:16.673 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:16.673 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.931 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:16.931 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:16.931 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:16.931 [127/268] Linking static target lib/librte_pci.a 00:03:16.931 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:17.189 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:17.189 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:17.189 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:17.189 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:17.189 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:17.189 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:17.189 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:17.189 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.189 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:17.449 [138/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:17.449 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:17.449 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:17.449 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:17.449 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:17.449 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:17.449 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:17.449 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:17.449 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:17.449 [147/268] Linking static target lib/librte_cmdline.a 00:03:17.707 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:17.707 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:17.967 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:17.967 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:17.967 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:17.967 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:17.967 [154/268] Linking static target lib/librte_timer.a 00:03:17.967 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:18.228 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:18.487 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:18.487 [158/268] Linking static target lib/librte_hash.a 00:03:18.487 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:18.487 [160/268] Linking static target lib/librte_compressdev.a 00:03:18.488 [161/268] Compiling 
C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:18.748 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.748 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:18.748 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:18.748 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:18.748 [166/268] Linking static target lib/librte_dmadev.a 00:03:18.748 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:18.748 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:19.008 [169/268] Linking static target lib/librte_ethdev.a 00:03:19.008 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.008 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:19.008 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:19.008 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:19.268 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:19.528 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:19.528 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:19.528 [177/268] Linking static target lib/librte_cryptodev.a 00:03:19.528 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.528 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.528 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:19.528 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:19.528 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 
00:03:19.788 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.788 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:20.046 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:20.046 [186/268] Linking static target lib/librte_power.a 00:03:20.046 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:20.046 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:20.305 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:20.305 [190/268] Linking static target lib/librte_reorder.a 00:03:20.305 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:20.305 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:20.306 [193/268] Linking static target lib/librte_security.a 00:03:20.911 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:20.911 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.168 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:21.168 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.168 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.425 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:21.425 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:21.683 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:21.683 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:21.683 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:21.683 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:21.683 [205/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:21.941 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.941 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:22.199 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:22.200 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:22.200 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:22.200 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:22.457 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:22.458 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:22.458 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.458 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.458 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:22.458 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.458 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.458 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:22.458 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:22.458 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:22.716 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:22.716 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.716 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.716 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:22.716 [226/268] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.975 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.912 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:24.850 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.850 [230/268] Linking target lib/librte_eal.so.24.1 00:03:25.110 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:25.110 [232/268] Linking target lib/librte_meter.so.24.1 00:03:25.110 [233/268] Linking target lib/librte_pci.so.24.1 00:03:25.110 [234/268] Linking target lib/librte_ring.so.24.1 00:03:25.110 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:25.110 [236/268] Linking target lib/librte_timer.so.24.1 00:03:25.110 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:25.110 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:25.110 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:25.110 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:25.110 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:25.110 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:25.110 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:25.110 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:25.369 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:25.369 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:25.369 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:25.369 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:25.369 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:25.627 
[250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:25.627 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:03:25.627 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:25.627 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:25.627 [254/268] Linking target lib/librte_net.so.24.1 00:03:25.886 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:25.886 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:25.886 [257/268] Linking target lib/librte_security.so.24.1 00:03:25.886 [258/268] Linking target lib/librte_hash.so.24.1 00:03:25.886 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:25.886 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:27.807 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.807 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:27.807 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:27.807 [264/268] Linking target lib/librte_power.so.24.1 00:03:28.745 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:28.745 [266/268] Linking static target lib/librte_vhost.a 00:03:31.281 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.281 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:31.281 INFO: autodetecting backend as ninja 00:03:31.281 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:49.373 CC lib/ut/ut.o 00:03:49.373 CC lib/ut_mock/mock.o 00:03:49.373 CC lib/log/log.o 00:03:49.373 CC lib/log/log_flags.o 00:03:49.373 CC lib/log/log_deprecated.o 00:03:49.373 LIB libspdk_ut_mock.a 00:03:49.373 LIB libspdk_ut.a 00:03:49.373 SO libspdk_ut_mock.so.6.0 00:03:49.373 LIB libspdk_log.a 
00:03:49.373 SO libspdk_ut.so.2.0 00:03:49.373 SO libspdk_log.so.7.1 00:03:49.373 SYMLINK libspdk_ut_mock.so 00:03:49.373 SYMLINK libspdk_ut.so 00:03:49.373 SYMLINK libspdk_log.so 00:03:49.373 CXX lib/trace_parser/trace.o 00:03:49.373 CC lib/ioat/ioat.o 00:03:49.373 CC lib/dma/dma.o 00:03:49.373 CC lib/util/base64.o 00:03:49.373 CC lib/util/bit_array.o 00:03:49.373 CC lib/util/cpuset.o 00:03:49.373 CC lib/util/crc32.o 00:03:49.373 CC lib/util/crc32c.o 00:03:49.373 CC lib/util/crc16.o 00:03:49.373 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.373 CC lib/util/crc32_ieee.o 00:03:49.373 CC lib/util/crc64.o 00:03:49.373 CC lib/util/dif.o 00:03:49.373 CC lib/util/fd.o 00:03:49.373 LIB libspdk_dma.a 00:03:49.373 SO libspdk_dma.so.5.0 00:03:49.373 CC lib/util/fd_group.o 00:03:49.373 CC lib/util/file.o 00:03:49.373 LIB libspdk_ioat.a 00:03:49.373 CC lib/vfio_user/host/vfio_user.o 00:03:49.373 SYMLINK libspdk_dma.so 00:03:49.373 CC lib/util/hexlify.o 00:03:49.373 CC lib/util/iov.o 00:03:49.373 SO libspdk_ioat.so.7.0 00:03:49.373 CC lib/util/math.o 00:03:49.373 CC lib/util/net.o 00:03:49.373 SYMLINK libspdk_ioat.so 00:03:49.373 CC lib/util/pipe.o 00:03:49.373 CC lib/util/strerror_tls.o 00:03:49.373 CC lib/util/string.o 00:03:49.373 CC lib/util/uuid.o 00:03:49.373 LIB libspdk_vfio_user.a 00:03:49.373 CC lib/util/xor.o 00:03:49.373 CC lib/util/zipf.o 00:03:49.373 SO libspdk_vfio_user.so.5.0 00:03:49.373 CC lib/util/md5.o 00:03:49.373 SYMLINK libspdk_vfio_user.so 00:03:49.373 LIB libspdk_util.a 00:03:49.373 SO libspdk_util.so.10.1 00:03:49.373 LIB libspdk_trace_parser.a 00:03:49.373 SO libspdk_trace_parser.so.6.0 00:03:49.632 SYMLINK libspdk_util.so 00:03:49.632 SYMLINK libspdk_trace_parser.so 00:03:49.632 CC lib/vmd/vmd.o 00:03:49.632 CC lib/vmd/led.o 00:03:49.632 CC lib/rdma_utils/rdma_utils.o 00:03:49.632 CC lib/conf/conf.o 00:03:49.632 CC lib/idxd/idxd.o 00:03:49.632 CC lib/idxd/idxd_user.o 00:03:49.632 CC lib/idxd/idxd_kernel.o 00:03:49.632 CC lib/env_dpdk/memory.o 
00:03:49.632 CC lib/env_dpdk/env.o 00:03:49.632 CC lib/json/json_parse.o 00:03:49.890 CC lib/env_dpdk/pci.o 00:03:49.890 CC lib/env_dpdk/init.o 00:03:49.890 LIB libspdk_conf.a 00:03:49.890 CC lib/env_dpdk/threads.o 00:03:49.890 SO libspdk_conf.so.6.0 00:03:49.890 CC lib/json/json_util.o 00:03:50.149 LIB libspdk_rdma_utils.a 00:03:50.149 SYMLINK libspdk_conf.so 00:03:50.149 CC lib/env_dpdk/pci_ioat.o 00:03:50.149 SO libspdk_rdma_utils.so.1.0 00:03:50.149 SYMLINK libspdk_rdma_utils.so 00:03:50.149 CC lib/env_dpdk/pci_virtio.o 00:03:50.149 CC lib/json/json_write.o 00:03:50.149 CC lib/env_dpdk/pci_vmd.o 00:03:50.149 CC lib/env_dpdk/pci_idxd.o 00:03:50.149 CC lib/env_dpdk/pci_event.o 00:03:50.149 CC lib/env_dpdk/sigbus_handler.o 00:03:50.149 CC lib/env_dpdk/pci_dpdk.o 00:03:50.149 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:50.410 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:50.410 LIB libspdk_json.a 00:03:50.410 LIB libspdk_idxd.a 00:03:50.410 SO libspdk_json.so.6.0 00:03:50.410 LIB libspdk_vmd.a 00:03:50.410 SO libspdk_idxd.so.12.1 00:03:50.410 SO libspdk_vmd.so.6.0 00:03:50.410 SYMLINK libspdk_json.so 00:03:50.410 SYMLINK libspdk_idxd.so 00:03:50.410 SYMLINK libspdk_vmd.so 00:03:50.674 CC lib/rdma_provider/common.o 00:03:50.674 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:50.674 LIB libspdk_rdma_provider.a 00:03:50.941 CC lib/jsonrpc/jsonrpc_server.o 00:03:50.941 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:50.941 CC lib/jsonrpc/jsonrpc_client.o 00:03:50.941 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:50.941 SO libspdk_rdma_provider.so.7.0 00:03:50.941 SYMLINK libspdk_rdma_provider.so 00:03:51.203 LIB libspdk_jsonrpc.a 00:03:51.203 SO libspdk_jsonrpc.so.6.0 00:03:51.203 SYMLINK libspdk_jsonrpc.so 00:03:51.203 LIB libspdk_env_dpdk.a 00:03:51.461 SO libspdk_env_dpdk.so.15.1 00:03:51.461 SYMLINK libspdk_env_dpdk.so 00:03:51.461 CC lib/rpc/rpc.o 00:03:51.720 LIB libspdk_rpc.a 00:03:51.720 SO libspdk_rpc.so.6.0 00:03:51.979 SYMLINK libspdk_rpc.so 00:03:52.237 CC lib/notify/notify.o 
00:03:52.237 CC lib/notify/notify_rpc.o 00:03:52.237 CC lib/keyring/keyring.o 00:03:52.237 CC lib/keyring/keyring_rpc.o 00:03:52.237 CC lib/trace/trace_rpc.o 00:03:52.237 CC lib/trace/trace.o 00:03:52.237 CC lib/trace/trace_flags.o 00:03:52.497 LIB libspdk_notify.a 00:03:52.497 SO libspdk_notify.so.6.0 00:03:52.497 SYMLINK libspdk_notify.so 00:03:52.497 LIB libspdk_keyring.a 00:03:52.497 LIB libspdk_trace.a 00:03:52.497 SO libspdk_keyring.so.2.0 00:03:52.497 SO libspdk_trace.so.11.0 00:03:52.756 SYMLINK libspdk_keyring.so 00:03:52.756 SYMLINK libspdk_trace.so 00:03:53.013 CC lib/thread/thread.o 00:03:53.013 CC lib/thread/iobuf.o 00:03:53.013 CC lib/sock/sock.o 00:03:53.013 CC lib/sock/sock_rpc.o 00:03:53.581 LIB libspdk_sock.a 00:03:53.581 SO libspdk_sock.so.10.0 00:03:53.581 SYMLINK libspdk_sock.so 00:03:54.148 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:54.148 CC lib/nvme/nvme_ctrlr.o 00:03:54.148 CC lib/nvme/nvme_fabric.o 00:03:54.148 CC lib/nvme/nvme_ns_cmd.o 00:03:54.148 CC lib/nvme/nvme_ns.o 00:03:54.148 CC lib/nvme/nvme_pcie_common.o 00:03:54.148 CC lib/nvme/nvme_qpair.o 00:03:54.148 CC lib/nvme/nvme_pcie.o 00:03:54.148 CC lib/nvme/nvme.o 00:03:54.715 CC lib/nvme/nvme_quirks.o 00:03:54.715 LIB libspdk_thread.a 00:03:54.715 CC lib/nvme/nvme_transport.o 00:03:54.715 SO libspdk_thread.so.11.0 00:03:54.715 CC lib/nvme/nvme_discovery.o 00:03:54.715 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:54.715 SYMLINK libspdk_thread.so 00:03:54.715 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:54.975 CC lib/nvme/nvme_tcp.o 00:03:54.975 CC lib/nvme/nvme_opal.o 00:03:54.975 CC lib/accel/accel.o 00:03:54.975 CC lib/nvme/nvme_io_msg.o 00:03:54.975 CC lib/nvme/nvme_poll_group.o 00:03:55.233 CC lib/nvme/nvme_zns.o 00:03:55.233 CC lib/nvme/nvme_stubs.o 00:03:55.233 CC lib/nvme/nvme_auth.o 00:03:55.492 CC lib/nvme/nvme_cuse.o 00:03:55.492 CC lib/blob/blobstore.o 00:03:55.750 CC lib/accel/accel_rpc.o 00:03:55.751 CC lib/accel/accel_sw.o 00:03:55.751 CC lib/blob/request.o 00:03:56.009 CC 
lib/init/json_config.o 00:03:56.009 CC lib/virtio/virtio.o 00:03:56.009 CC lib/virtio/virtio_vhost_user.o 00:03:56.009 LIB libspdk_accel.a 00:03:56.009 CC lib/blob/zeroes.o 00:03:56.009 SO libspdk_accel.so.16.0 00:03:56.009 CC lib/init/subsystem.o 00:03:56.269 SYMLINK libspdk_accel.so 00:03:56.269 CC lib/blob/blob_bs_dev.o 00:03:56.269 CC lib/nvme/nvme_rdma.o 00:03:56.269 CC lib/virtio/virtio_vfio_user.o 00:03:56.269 CC lib/init/subsystem_rpc.o 00:03:56.269 CC lib/virtio/virtio_pci.o 00:03:56.269 CC lib/init/rpc.o 00:03:56.269 CC lib/fsdev/fsdev.o 00:03:56.269 CC lib/fsdev/fsdev_io.o 00:03:56.528 CC lib/fsdev/fsdev_rpc.o 00:03:56.528 LIB libspdk_init.a 00:03:56.528 SO libspdk_init.so.6.0 00:03:56.528 CC lib/bdev/bdev.o 00:03:56.528 CC lib/bdev/bdev_rpc.o 00:03:56.528 CC lib/bdev/bdev_zone.o 00:03:56.528 SYMLINK libspdk_init.so 00:03:56.528 CC lib/bdev/part.o 00:03:56.528 LIB libspdk_virtio.a 00:03:56.528 SO libspdk_virtio.so.7.0 00:03:56.528 CC lib/event/app.o 00:03:56.528 CC lib/bdev/scsi_nvme.o 00:03:56.786 SYMLINK libspdk_virtio.so 00:03:56.786 CC lib/event/reactor.o 00:03:56.786 CC lib/event/log_rpc.o 00:03:56.786 CC lib/event/app_rpc.o 00:03:56.786 CC lib/event/scheduler_static.o 00:03:57.045 LIB libspdk_fsdev.a 00:03:57.045 SO libspdk_fsdev.so.2.0 00:03:57.045 LIB libspdk_event.a 00:03:57.045 SYMLINK libspdk_fsdev.so 00:03:57.304 SO libspdk_event.so.14.0 00:03:57.304 SYMLINK libspdk_event.so 00:03:57.563 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:57.563 LIB libspdk_nvme.a 00:03:57.822 SO libspdk_nvme.so.15.0 00:03:58.081 SYMLINK libspdk_nvme.so 00:03:58.081 LIB libspdk_fuse_dispatcher.a 00:03:58.341 SO libspdk_fuse_dispatcher.so.1.0 00:03:58.341 SYMLINK libspdk_fuse_dispatcher.so 00:03:59.335 LIB libspdk_blob.a 00:03:59.335 SO libspdk_blob.so.11.0 00:03:59.335 SYMLINK libspdk_blob.so 00:03:59.593 LIB libspdk_bdev.a 00:03:59.593 SO libspdk_bdev.so.17.0 00:03:59.851 CC lib/lvol/lvol.o 00:03:59.851 CC lib/blobfs/blobfs.o 00:03:59.851 SYMLINK libspdk_bdev.so 
00:03:59.851 CC lib/blobfs/tree.o 00:03:59.851 CC lib/ublk/ublk.o 00:03:59.851 CC lib/nbd/nbd.o 00:03:59.851 CC lib/ftl/ftl_core.o 00:03:59.851 CC lib/nbd/nbd_rpc.o 00:03:59.851 CC lib/ftl/ftl_init.o 00:03:59.851 CC lib/nvmf/ctrlr.o 00:03:59.851 CC lib/nvmf/ctrlr_discovery.o 00:03:59.851 CC lib/scsi/dev.o 00:04:00.108 CC lib/scsi/lun.o 00:04:00.108 CC lib/ftl/ftl_layout.o 00:04:00.108 CC lib/ftl/ftl_debug.o 00:04:00.366 CC lib/ublk/ublk_rpc.o 00:04:00.366 LIB libspdk_nbd.a 00:04:00.366 SO libspdk_nbd.so.7.0 00:04:00.366 CC lib/scsi/port.o 00:04:00.366 CC lib/scsi/scsi.o 00:04:00.366 SYMLINK libspdk_nbd.so 00:04:00.366 CC lib/ftl/ftl_io.o 00:04:00.366 CC lib/scsi/scsi_bdev.o 00:04:00.366 CC lib/scsi/scsi_pr.o 00:04:00.625 CC lib/scsi/scsi_rpc.o 00:04:00.625 CC lib/scsi/task.o 00:04:00.625 CC lib/nvmf/ctrlr_bdev.o 00:04:00.625 LIB libspdk_ublk.a 00:04:00.625 SO libspdk_ublk.so.3.0 00:04:00.625 CC lib/ftl/ftl_sb.o 00:04:00.625 LIB libspdk_blobfs.a 00:04:00.625 SYMLINK libspdk_ublk.so 00:04:00.625 SO libspdk_blobfs.so.10.0 00:04:00.625 CC lib/nvmf/subsystem.o 00:04:00.625 CC lib/ftl/ftl_l2p.o 00:04:00.885 SYMLINK libspdk_blobfs.so 00:04:00.885 LIB libspdk_lvol.a 00:04:00.885 CC lib/nvmf/nvmf.o 00:04:00.885 CC lib/ftl/ftl_l2p_flat.o 00:04:00.885 CC lib/ftl/ftl_nv_cache.o 00:04:00.885 SO libspdk_lvol.so.10.0 00:04:00.885 SYMLINK libspdk_lvol.so 00:04:00.885 CC lib/ftl/ftl_band.o 00:04:00.885 CC lib/ftl/ftl_band_ops.o 00:04:00.885 CC lib/nvmf/nvmf_rpc.o 00:04:01.145 CC lib/ftl/ftl_writer.o 00:04:01.145 LIB libspdk_scsi.a 00:04:01.145 SO libspdk_scsi.so.9.0 00:04:01.145 SYMLINK libspdk_scsi.so 00:04:01.145 CC lib/nvmf/transport.o 00:04:01.145 CC lib/ftl/ftl_rq.o 00:04:01.403 CC lib/nvmf/tcp.o 00:04:01.403 CC lib/iscsi/conn.o 00:04:01.403 CC lib/ftl/ftl_reloc.o 00:04:01.403 CC lib/vhost/vhost.o 00:04:01.662 CC lib/vhost/vhost_rpc.o 00:04:01.662 CC lib/iscsi/init_grp.o 00:04:01.920 CC lib/vhost/vhost_scsi.o 00:04:01.920 CC lib/ftl/ftl_l2p_cache.o 00:04:01.920 CC 
lib/nvmf/stubs.o 00:04:01.920 CC lib/nvmf/mdns_server.o 00:04:01.920 CC lib/nvmf/rdma.o 00:04:02.179 CC lib/iscsi/iscsi.o 00:04:02.179 CC lib/iscsi/param.o 00:04:02.179 CC lib/vhost/vhost_blk.o 00:04:02.440 CC lib/nvmf/auth.o 00:04:02.440 CC lib/vhost/rte_vhost_user.o 00:04:02.440 CC lib/iscsi/portal_grp.o 00:04:02.440 CC lib/ftl/ftl_p2l.o 00:04:02.440 CC lib/ftl/ftl_p2l_log.o 00:04:02.699 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.699 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.699 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.699 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.958 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.958 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.958 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.958 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:03.217 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:03.217 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:03.217 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:03.217 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:03.217 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:03.217 CC lib/ftl/utils/ftl_conf.o 00:04:03.217 CC lib/ftl/utils/ftl_md.o 00:04:03.217 CC lib/ftl/utils/ftl_mempool.o 00:04:03.476 CC lib/ftl/utils/ftl_bitmap.o 00:04:03.476 CC lib/ftl/utils/ftl_property.o 00:04:03.476 LIB libspdk_vhost.a 00:04:03.476 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:03.476 SO libspdk_vhost.so.8.0 00:04:03.476 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:03.476 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:03.476 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:03.476 SYMLINK libspdk_vhost.so 00:04:03.476 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:03.476 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:03.476 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:03.735 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.735 CC lib/iscsi/tgt_node.o 00:04:03.735 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.735 CC lib/iscsi/iscsi_subsystem.o 00:04:03.735 CC lib/iscsi/iscsi_rpc.o 00:04:03.735 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.735 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.735 CC lib/iscsi/task.o 00:04:03.735 
CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:03.735 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:03.994 CC lib/ftl/base/ftl_base_dev.o 00:04:03.994 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.994 CC lib/ftl/ftl_trace.o 00:04:04.253 LIB libspdk_iscsi.a 00:04:04.253 LIB libspdk_ftl.a 00:04:04.253 SO libspdk_iscsi.so.8.0 00:04:04.512 SYMLINK libspdk_iscsi.so 00:04:04.512 SO libspdk_ftl.so.9.0 00:04:04.512 LIB libspdk_nvmf.a 00:04:04.770 SO libspdk_nvmf.so.20.0 00:04:04.770 SYMLINK libspdk_ftl.so 00:04:05.029 SYMLINK libspdk_nvmf.so 00:04:05.289 CC module/env_dpdk/env_dpdk_rpc.o 00:04:05.289 CC module/blob/bdev/blob_bdev.o 00:04:05.289 CC module/fsdev/aio/fsdev_aio.o 00:04:05.289 CC module/accel/error/accel_error.o 00:04:05.289 CC module/accel/ioat/accel_ioat.o 00:04:05.289 CC module/keyring/linux/keyring.o 00:04:05.289 CC module/accel/dsa/accel_dsa.o 00:04:05.289 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:05.289 CC module/keyring/file/keyring.o 00:04:05.289 CC module/sock/posix/posix.o 00:04:05.289 LIB libspdk_env_dpdk_rpc.a 00:04:05.548 SO libspdk_env_dpdk_rpc.so.6.0 00:04:05.548 SYMLINK libspdk_env_dpdk_rpc.so 00:04:05.548 CC module/keyring/linux/keyring_rpc.o 00:04:05.548 CC module/accel/ioat/accel_ioat_rpc.o 00:04:05.548 CC module/keyring/file/keyring_rpc.o 00:04:05.548 LIB libspdk_scheduler_dynamic.a 00:04:05.548 CC module/accel/error/accel_error_rpc.o 00:04:05.548 SO libspdk_scheduler_dynamic.so.4.0 00:04:05.548 LIB libspdk_keyring_linux.a 00:04:05.548 LIB libspdk_accel_ioat.a 00:04:05.548 LIB libspdk_blob_bdev.a 00:04:05.548 SYMLINK libspdk_scheduler_dynamic.so 00:04:05.548 LIB libspdk_keyring_file.a 00:04:05.548 CC module/accel/dsa/accel_dsa_rpc.o 00:04:05.548 SO libspdk_keyring_linux.so.1.0 00:04:05.548 SO libspdk_blob_bdev.so.11.0 00:04:05.548 SO libspdk_accel_ioat.so.6.0 00:04:05.548 SO libspdk_keyring_file.so.2.0 00:04:05.807 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:05.807 SYMLINK libspdk_keyring_linux.so 00:04:05.807 LIB 
libspdk_accel_error.a 00:04:05.807 SYMLINK libspdk_accel_ioat.so 00:04:05.807 SYMLINK libspdk_blob_bdev.so 00:04:05.807 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:05.807 SYMLINK libspdk_keyring_file.so 00:04:05.807 SO libspdk_accel_error.so.2.0 00:04:05.807 LIB libspdk_accel_dsa.a 00:04:05.807 SO libspdk_accel_dsa.so.5.0 00:04:05.807 LIB libspdk_scheduler_dpdk_governor.a 00:04:05.807 SYMLINK libspdk_accel_error.so 00:04:05.807 CC module/scheduler/gscheduler/gscheduler.o 00:04:05.807 CC module/fsdev/aio/linux_aio_mgr.o 00:04:05.807 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:05.807 CC module/accel/iaa/accel_iaa.o 00:04:05.807 SYMLINK libspdk_accel_dsa.so 00:04:05.807 CC module/accel/iaa/accel_iaa_rpc.o 00:04:06.065 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:06.065 CC module/bdev/delay/vbdev_delay.o 00:04:06.065 LIB libspdk_scheduler_gscheduler.a 00:04:06.065 CC module/blobfs/bdev/blobfs_bdev.o 00:04:06.065 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:06.065 SO libspdk_scheduler_gscheduler.so.4.0 00:04:06.065 CC module/bdev/error/vbdev_error.o 00:04:06.065 CC module/bdev/error/vbdev_error_rpc.o 00:04:06.065 SYMLINK libspdk_scheduler_gscheduler.so 00:04:06.065 LIB libspdk_fsdev_aio.a 00:04:06.065 LIB libspdk_accel_iaa.a 00:04:06.065 CC module/bdev/gpt/gpt.o 00:04:06.065 SO libspdk_accel_iaa.so.3.0 00:04:06.065 SO libspdk_fsdev_aio.so.1.0 00:04:06.065 LIB libspdk_sock_posix.a 00:04:06.065 SYMLINK libspdk_accel_iaa.so 00:04:06.065 LIB libspdk_blobfs_bdev.a 00:04:06.065 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:06.065 SYMLINK libspdk_fsdev_aio.so 00:04:06.326 SO libspdk_sock_posix.so.6.0 00:04:06.326 SO libspdk_blobfs_bdev.so.6.0 00:04:06.326 CC module/bdev/lvol/vbdev_lvol.o 00:04:06.326 CC module/bdev/gpt/vbdev_gpt.o 00:04:06.326 SYMLINK libspdk_blobfs_bdev.so 00:04:06.326 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:06.326 SYMLINK libspdk_sock_posix.so 00:04:06.326 LIB libspdk_bdev_error.a 00:04:06.326 CC module/bdev/malloc/bdev_malloc.o 00:04:06.326 SO 
libspdk_bdev_error.so.6.0 00:04:06.326 CC module/bdev/null/bdev_null.o 00:04:06.326 LIB libspdk_bdev_delay.a 00:04:06.326 CC module/bdev/null/bdev_null_rpc.o 00:04:06.326 CC module/bdev/nvme/bdev_nvme.o 00:04:06.326 SO libspdk_bdev_delay.so.6.0 00:04:06.326 SYMLINK libspdk_bdev_error.so 00:04:06.326 CC module/bdev/passthru/vbdev_passthru.o 00:04:06.326 SYMLINK libspdk_bdev_delay.so 00:04:06.326 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:06.585 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:06.585 CC module/bdev/raid/bdev_raid.o 00:04:06.585 LIB libspdk_bdev_gpt.a 00:04:06.585 SO libspdk_bdev_gpt.so.6.0 00:04:06.585 LIB libspdk_bdev_null.a 00:04:06.585 CC module/bdev/raid/bdev_raid_rpc.o 00:04:06.585 SYMLINK libspdk_bdev_gpt.so 00:04:06.585 SO libspdk_bdev_null.so.6.0 00:04:06.585 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:06.585 SYMLINK libspdk_bdev_null.so 00:04:06.585 CC module/bdev/nvme/nvme_rpc.o 00:04:06.585 LIB libspdk_bdev_malloc.a 00:04:06.845 SO libspdk_bdev_malloc.so.6.0 00:04:06.845 CC module/bdev/split/vbdev_split.o 00:04:06.845 CC module/bdev/raid/bdev_raid_sb.o 00:04:06.845 LIB libspdk_bdev_lvol.a 00:04:06.845 SYMLINK libspdk_bdev_malloc.so 00:04:06.845 CC module/bdev/split/vbdev_split_rpc.o 00:04:06.845 LIB libspdk_bdev_passthru.a 00:04:06.845 SO libspdk_bdev_lvol.so.6.0 00:04:06.845 CC module/bdev/nvme/bdev_mdns_client.o 00:04:06.845 SO libspdk_bdev_passthru.so.6.0 00:04:06.845 SYMLINK libspdk_bdev_lvol.so 00:04:06.845 SYMLINK libspdk_bdev_passthru.so 00:04:06.845 CC module/bdev/nvme/vbdev_opal.o 00:04:06.845 LIB libspdk_bdev_split.a 00:04:06.845 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:07.105 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:07.105 SO libspdk_bdev_split.so.6.0 00:04:07.105 CC module/bdev/aio/bdev_aio.o 00:04:07.105 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:07.105 SYMLINK libspdk_bdev_split.so 00:04:07.105 CC module/bdev/ftl/bdev_ftl.o 00:04:07.105 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:07.105 CC 
module/bdev/aio/bdev_aio_rpc.o 00:04:07.105 CC module/bdev/raid/raid0.o 00:04:07.365 CC module/bdev/iscsi/bdev_iscsi.o 00:04:07.365 LIB libspdk_bdev_zone_block.a 00:04:07.365 SO libspdk_bdev_zone_block.so.6.0 00:04:07.365 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:07.365 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:07.365 LIB libspdk_bdev_aio.a 00:04:07.365 LIB libspdk_bdev_ftl.a 00:04:07.365 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:07.365 SYMLINK libspdk_bdev_zone_block.so 00:04:07.365 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:07.365 SO libspdk_bdev_ftl.so.6.0 00:04:07.365 SO libspdk_bdev_aio.so.6.0 00:04:07.365 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:07.365 SYMLINK libspdk_bdev_ftl.so 00:04:07.365 CC module/bdev/raid/raid1.o 00:04:07.365 SYMLINK libspdk_bdev_aio.so 00:04:07.625 CC module/bdev/raid/concat.o 00:04:07.625 CC module/bdev/raid/raid5f.o 00:04:07.625 LIB libspdk_bdev_iscsi.a 00:04:07.625 SO libspdk_bdev_iscsi.so.6.0 00:04:07.885 SYMLINK libspdk_bdev_iscsi.so 00:04:07.885 LIB libspdk_bdev_virtio.a 00:04:07.885 SO libspdk_bdev_virtio.so.6.0 00:04:08.145 SYMLINK libspdk_bdev_virtio.so 00:04:08.145 LIB libspdk_bdev_raid.a 00:04:08.145 SO libspdk_bdev_raid.so.6.0 00:04:08.405 SYMLINK libspdk_bdev_raid.so 00:04:08.974 LIB libspdk_bdev_nvme.a 00:04:08.974 SO libspdk_bdev_nvme.so.7.1 00:04:09.234 SYMLINK libspdk_bdev_nvme.so 00:04:09.804 CC module/event/subsystems/sock/sock.o 00:04:09.804 CC module/event/subsystems/iobuf/iobuf.o 00:04:09.804 CC module/event/subsystems/scheduler/scheduler.o 00:04:09.804 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:09.804 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:09.804 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:09.804 CC module/event/subsystems/vmd/vmd.o 00:04:09.804 CC module/event/subsystems/fsdev/fsdev.o 00:04:09.804 CC module/event/subsystems/keyring/keyring.o 00:04:10.064 LIB libspdk_event_scheduler.a 00:04:10.064 LIB libspdk_event_fsdev.a 00:04:10.064 LIB libspdk_event_iobuf.a 
00:04:10.064 LIB libspdk_event_keyring.a 00:04:10.064 LIB libspdk_event_vhost_blk.a 00:04:10.064 SO libspdk_event_fsdev.so.1.0 00:04:10.064 SO libspdk_event_scheduler.so.4.0 00:04:10.064 SO libspdk_event_keyring.so.1.0 00:04:10.064 SO libspdk_event_iobuf.so.3.0 00:04:10.064 LIB libspdk_event_sock.a 00:04:10.064 SO libspdk_event_vhost_blk.so.3.0 00:04:10.064 LIB libspdk_event_vmd.a 00:04:10.064 SO libspdk_event_sock.so.5.0 00:04:10.064 SYMLINK libspdk_event_fsdev.so 00:04:10.064 SYMLINK libspdk_event_scheduler.so 00:04:10.064 SYMLINK libspdk_event_keyring.so 00:04:10.064 SYMLINK libspdk_event_iobuf.so 00:04:10.064 SO libspdk_event_vmd.so.6.0 00:04:10.064 SYMLINK libspdk_event_vhost_blk.so 00:04:10.064 SYMLINK libspdk_event_sock.so 00:04:10.064 SYMLINK libspdk_event_vmd.so 00:04:10.324 CC module/event/subsystems/accel/accel.o 00:04:10.584 LIB libspdk_event_accel.a 00:04:10.584 SO libspdk_event_accel.so.6.0 00:04:10.584 SYMLINK libspdk_event_accel.so 00:04:11.154 CC module/event/subsystems/bdev/bdev.o 00:04:11.414 LIB libspdk_event_bdev.a 00:04:11.414 SO libspdk_event_bdev.so.6.0 00:04:11.414 SYMLINK libspdk_event_bdev.so 00:04:11.673 CC module/event/subsystems/nbd/nbd.o 00:04:11.673 CC module/event/subsystems/ublk/ublk.o 00:04:11.673 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:11.673 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:11.673 CC module/event/subsystems/scsi/scsi.o 00:04:11.932 LIB libspdk_event_ublk.a 00:04:11.932 LIB libspdk_event_nbd.a 00:04:11.932 LIB libspdk_event_scsi.a 00:04:11.932 SO libspdk_event_ublk.so.3.0 00:04:11.932 SO libspdk_event_nbd.so.6.0 00:04:11.932 SO libspdk_event_scsi.so.6.0 00:04:11.932 LIB libspdk_event_nvmf.a 00:04:11.932 SYMLINK libspdk_event_nbd.so 00:04:11.932 SYMLINK libspdk_event_ublk.so 00:04:11.932 SYMLINK libspdk_event_scsi.so 00:04:12.192 SO libspdk_event_nvmf.so.6.0 00:04:12.192 SYMLINK libspdk_event_nvmf.so 00:04:12.452 CC module/event/subsystems/iscsi/iscsi.o 00:04:12.452 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:12.452 LIB libspdk_event_iscsi.a 00:04:12.452 LIB libspdk_event_vhost_scsi.a 00:04:12.712 SO libspdk_event_iscsi.so.6.0 00:04:12.712 SO libspdk_event_vhost_scsi.so.3.0 00:04:12.712 SYMLINK libspdk_event_iscsi.so 00:04:12.712 SYMLINK libspdk_event_vhost_scsi.so 00:04:12.984 SO libspdk.so.6.0 00:04:12.984 SYMLINK libspdk.so 00:04:13.263 CXX app/trace/trace.o 00:04:13.263 CC app/spdk_lspci/spdk_lspci.o 00:04:13.264 CC app/trace_record/trace_record.o 00:04:13.264 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:13.264 CC app/nvmf_tgt/nvmf_main.o 00:04:13.264 CC app/iscsi_tgt/iscsi_tgt.o 00:04:13.264 CC examples/util/zipf/zipf.o 00:04:13.264 CC app/spdk_tgt/spdk_tgt.o 00:04:13.264 CC examples/ioat/perf/perf.o 00:04:13.264 CC test/thread/poller_perf/poller_perf.o 00:04:13.264 LINK spdk_lspci 00:04:13.524 LINK nvmf_tgt 00:04:13.524 LINK zipf 00:04:13.524 LINK interrupt_tgt 00:04:13.524 LINK iscsi_tgt 00:04:13.524 LINK poller_perf 00:04:13.524 LINK spdk_tgt 00:04:13.524 LINK spdk_trace_record 00:04:13.524 LINK ioat_perf 00:04:13.524 CC app/spdk_nvme_perf/perf.o 00:04:13.524 LINK spdk_trace 00:04:13.783 TEST_HEADER include/spdk/accel.h 00:04:13.783 CC app/spdk_nvme_identify/identify.o 00:04:13.783 TEST_HEADER include/spdk/accel_module.h 00:04:13.783 TEST_HEADER include/spdk/assert.h 00:04:13.783 TEST_HEADER include/spdk/barrier.h 00:04:13.783 TEST_HEADER include/spdk/base64.h 00:04:13.783 TEST_HEADER include/spdk/bdev.h 00:04:13.783 TEST_HEADER include/spdk/bdev_module.h 00:04:13.784 TEST_HEADER include/spdk/bdev_zone.h 00:04:13.784 TEST_HEADER include/spdk/bit_array.h 00:04:13.784 TEST_HEADER include/spdk/bit_pool.h 00:04:13.784 TEST_HEADER include/spdk/blob_bdev.h 00:04:13.784 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:13.784 TEST_HEADER include/spdk/blobfs.h 00:04:13.784 TEST_HEADER include/spdk/blob.h 00:04:13.784 TEST_HEADER include/spdk/conf.h 00:04:13.784 TEST_HEADER include/spdk/config.h 00:04:13.784 TEST_HEADER 
include/spdk/cpuset.h 00:04:13.784 TEST_HEADER include/spdk/crc16.h 00:04:13.784 CC examples/ioat/verify/verify.o 00:04:13.784 TEST_HEADER include/spdk/crc32.h 00:04:13.784 TEST_HEADER include/spdk/crc64.h 00:04:13.784 TEST_HEADER include/spdk/dif.h 00:04:13.784 TEST_HEADER include/spdk/dma.h 00:04:13.784 TEST_HEADER include/spdk/endian.h 00:04:13.784 TEST_HEADER include/spdk/env_dpdk.h 00:04:13.784 TEST_HEADER include/spdk/env.h 00:04:13.784 TEST_HEADER include/spdk/event.h 00:04:13.784 TEST_HEADER include/spdk/fd_group.h 00:04:13.784 TEST_HEADER include/spdk/fd.h 00:04:13.784 TEST_HEADER include/spdk/file.h 00:04:13.784 TEST_HEADER include/spdk/fsdev.h 00:04:13.784 TEST_HEADER include/spdk/fsdev_module.h 00:04:13.784 TEST_HEADER include/spdk/ftl.h 00:04:13.784 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:13.784 TEST_HEADER include/spdk/gpt_spec.h 00:04:13.784 TEST_HEADER include/spdk/hexlify.h 00:04:13.784 TEST_HEADER include/spdk/histogram_data.h 00:04:13.784 TEST_HEADER include/spdk/idxd.h 00:04:13.784 TEST_HEADER include/spdk/idxd_spec.h 00:04:13.784 TEST_HEADER include/spdk/init.h 00:04:13.784 TEST_HEADER include/spdk/ioat.h 00:04:13.784 TEST_HEADER include/spdk/ioat_spec.h 00:04:13.784 TEST_HEADER include/spdk/iscsi_spec.h 00:04:13.784 CC test/dma/test_dma/test_dma.o 00:04:13.784 TEST_HEADER include/spdk/json.h 00:04:13.784 TEST_HEADER include/spdk/jsonrpc.h 00:04:13.784 TEST_HEADER include/spdk/keyring.h 00:04:13.784 TEST_HEADER include/spdk/keyring_module.h 00:04:13.784 TEST_HEADER include/spdk/likely.h 00:04:13.784 TEST_HEADER include/spdk/log.h 00:04:13.784 TEST_HEADER include/spdk/lvol.h 00:04:13.784 TEST_HEADER include/spdk/md5.h 00:04:13.784 TEST_HEADER include/spdk/memory.h 00:04:13.784 TEST_HEADER include/spdk/mmio.h 00:04:13.784 TEST_HEADER include/spdk/nbd.h 00:04:13.784 TEST_HEADER include/spdk/net.h 00:04:13.784 TEST_HEADER include/spdk/notify.h 00:04:13.784 TEST_HEADER include/spdk/nvme.h 00:04:13.784 TEST_HEADER include/spdk/nvme_intel.h 
00:04:13.784 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:13.784 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:13.784 TEST_HEADER include/spdk/nvme_spec.h 00:04:13.784 TEST_HEADER include/spdk/nvme_zns.h 00:04:13.784 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:13.784 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:13.784 TEST_HEADER include/spdk/nvmf.h 00:04:13.784 CC test/app/bdev_svc/bdev_svc.o 00:04:13.784 TEST_HEADER include/spdk/nvmf_spec.h 00:04:13.784 TEST_HEADER include/spdk/nvmf_transport.h 00:04:13.784 TEST_HEADER include/spdk/opal.h 00:04:13.784 TEST_HEADER include/spdk/opal_spec.h 00:04:13.784 TEST_HEADER include/spdk/pci_ids.h 00:04:13.784 TEST_HEADER include/spdk/pipe.h 00:04:13.784 TEST_HEADER include/spdk/queue.h 00:04:13.784 TEST_HEADER include/spdk/reduce.h 00:04:13.784 TEST_HEADER include/spdk/rpc.h 00:04:13.784 TEST_HEADER include/spdk/scheduler.h 00:04:13.784 TEST_HEADER include/spdk/scsi.h 00:04:13.784 CC examples/sock/hello_world/hello_sock.o 00:04:13.784 TEST_HEADER include/spdk/scsi_spec.h 00:04:13.784 TEST_HEADER include/spdk/sock.h 00:04:13.784 TEST_HEADER include/spdk/stdinc.h 00:04:13.784 CC examples/thread/thread/thread_ex.o 00:04:13.784 TEST_HEADER include/spdk/string.h 00:04:13.784 TEST_HEADER include/spdk/thread.h 00:04:13.784 TEST_HEADER include/spdk/trace.h 00:04:13.784 TEST_HEADER include/spdk/trace_parser.h 00:04:13.784 TEST_HEADER include/spdk/tree.h 00:04:13.784 TEST_HEADER include/spdk/ublk.h 00:04:13.784 TEST_HEADER include/spdk/util.h 00:04:13.784 TEST_HEADER include/spdk/uuid.h 00:04:13.784 TEST_HEADER include/spdk/version.h 00:04:13.784 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:13.784 CC test/env/mem_callbacks/mem_callbacks.o 00:04:13.784 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:13.784 TEST_HEADER include/spdk/vhost.h 00:04:13.784 TEST_HEADER include/spdk/vmd.h 00:04:13.784 TEST_HEADER include/spdk/xor.h 00:04:13.784 TEST_HEADER include/spdk/zipf.h 00:04:13.784 CXX test/cpp_headers/accel.o 00:04:14.044 CC 
examples/vmd/lsvmd/lsvmd.o 00:04:14.044 LINK verify 00:04:14.044 LINK bdev_svc 00:04:14.044 CXX test/cpp_headers/accel_module.o 00:04:14.044 LINK lsvmd 00:04:14.044 LINK hello_sock 00:04:14.044 LINK thread 00:04:14.044 CXX test/cpp_headers/assert.o 00:04:14.303 CC examples/vmd/led/led.o 00:04:14.304 CXX test/cpp_headers/barrier.o 00:04:14.304 LINK test_dma 00:04:14.304 CC test/env/vtophys/vtophys.o 00:04:14.304 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.304 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.304 LINK mem_callbacks 00:04:14.304 LINK led 00:04:14.562 CXX test/cpp_headers/base64.o 00:04:14.563 CC test/event/event_perf/event_perf.o 00:04:14.563 LINK vtophys 00:04:14.563 LINK spdk_nvme_perf 00:04:14.563 LINK spdk_nvme_discover 00:04:14.563 CC test/event/reactor_perf/reactor_perf.o 00:04:14.563 CC test/event/reactor/reactor.o 00:04:14.563 CXX test/cpp_headers/bdev.o 00:04:14.563 LINK event_perf 00:04:14.563 LINK spdk_nvme_identify 00:04:14.821 LINK reactor_perf 00:04:14.821 LINK reactor 00:04:14.821 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:14.821 CC test/env/memory/memory_ut.o 00:04:14.821 CXX test/cpp_headers/bdev_module.o 00:04:14.821 CC examples/idxd/perf/perf.o 00:04:14.821 CXX test/cpp_headers/bdev_zone.o 00:04:14.821 LINK nvme_fuzz 00:04:14.821 CC test/env/pci/pci_ut.o 00:04:14.821 CXX test/cpp_headers/bit_array.o 00:04:14.821 CC app/spdk_top/spdk_top.o 00:04:14.821 LINK env_dpdk_post_init 00:04:15.080 CC test/event/app_repeat/app_repeat.o 00:04:15.080 CXX test/cpp_headers/bit_pool.o 00:04:15.080 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:15.080 CC app/vhost/vhost.o 00:04:15.080 CC test/event/scheduler/scheduler.o 00:04:15.080 LINK idxd_perf 00:04:15.080 LINK app_repeat 00:04:15.080 CC app/spdk_dd/spdk_dd.o 00:04:15.080 CXX test/cpp_headers/blob_bdev.o 00:04:15.080 LINK pci_ut 00:04:15.340 LINK vhost 00:04:15.340 LINK scheduler 00:04:15.340 CXX test/cpp_headers/blobfs_bdev.o 00:04:15.340 CC 
examples/fsdev/hello_world/hello_fsdev.o 00:04:15.340 CC examples/accel/perf/accel_perf.o 00:04:15.598 LINK spdk_dd 00:04:15.598 CXX test/cpp_headers/blobfs.o 00:04:15.598 CC examples/nvme/hello_world/hello_world.o 00:04:15.598 CC examples/blob/hello_world/hello_blob.o 00:04:15.598 CC app/fio/nvme/fio_plugin.o 00:04:15.598 CXX test/cpp_headers/blob.o 00:04:15.598 LINK hello_fsdev 00:04:15.858 CC test/app/histogram_perf/histogram_perf.o 00:04:15.858 CXX test/cpp_headers/conf.o 00:04:15.858 LINK hello_world 00:04:15.858 LINK hello_blob 00:04:15.858 LINK spdk_top 00:04:15.858 LINK histogram_perf 00:04:15.858 LINK accel_perf 00:04:15.858 CC test/app/jsoncat/jsoncat.o 00:04:15.858 CXX test/cpp_headers/config.o 00:04:15.858 LINK memory_ut 00:04:15.858 CXX test/cpp_headers/cpuset.o 00:04:16.118 LINK jsoncat 00:04:16.118 CC examples/nvme/reconnect/reconnect.o 00:04:16.118 CC test/app/stub/stub.o 00:04:16.118 CXX test/cpp_headers/crc16.o 00:04:16.118 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:16.118 CC examples/blob/cli/blobcli.o 00:04:16.118 CC examples/nvme/arbitration/arbitration.o 00:04:16.118 CXX test/cpp_headers/crc32.o 00:04:16.118 LINK spdk_nvme 00:04:16.118 LINK stub 00:04:16.118 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:16.377 CXX test/cpp_headers/crc64.o 00:04:16.377 CC app/fio/bdev/fio_plugin.o 00:04:16.377 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:16.377 LINK reconnect 00:04:16.377 CC examples/bdev/hello_world/hello_bdev.o 00:04:16.377 LINK arbitration 00:04:16.377 CXX test/cpp_headers/dif.o 00:04:16.377 CC examples/bdev/bdevperf/bdevperf.o 00:04:16.636 CC examples/nvme/hotplug/hotplug.o 00:04:16.636 CXX test/cpp_headers/dma.o 00:04:16.636 LINK hello_bdev 00:04:16.636 LINK blobcli 00:04:16.636 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.636 LINK nvme_manage 00:04:16.636 CXX test/cpp_headers/endian.o 00:04:16.896 LINK iscsi_fuzz 00:04:16.896 LINK hotplug 00:04:16.896 LINK spdk_bdev 00:04:16.896 LINK cmb_copy 00:04:16.896 LINK vhost_fuzz 
00:04:16.896 CXX test/cpp_headers/env_dpdk.o 00:04:16.896 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.896 CC examples/nvme/abort/abort.o 00:04:16.896 CC test/rpc_client/rpc_client_test.o 00:04:16.896 CXX test/cpp_headers/env.o 00:04:16.896 CXX test/cpp_headers/event.o 00:04:16.896 CXX test/cpp_headers/fd_group.o 00:04:16.896 CXX test/cpp_headers/fd.o 00:04:16.896 CXX test/cpp_headers/file.o 00:04:17.155 LINK pmr_persistence 00:04:17.155 CXX test/cpp_headers/fsdev.o 00:04:17.155 LINK rpc_client_test 00:04:17.155 CXX test/cpp_headers/fsdev_module.o 00:04:17.155 CXX test/cpp_headers/ftl.o 00:04:17.155 CXX test/cpp_headers/fuse_dispatcher.o 00:04:17.155 CC test/accel/dif/dif.o 00:04:17.155 CXX test/cpp_headers/gpt_spec.o 00:04:17.155 CC test/blobfs/mkfs/mkfs.o 00:04:17.155 CXX test/cpp_headers/hexlify.o 00:04:17.155 CXX test/cpp_headers/histogram_data.o 00:04:17.155 CXX test/cpp_headers/idxd.o 00:04:17.155 LINK abort 00:04:17.415 LINK bdevperf 00:04:17.415 CC test/nvme/aer/aer.o 00:04:17.415 CXX test/cpp_headers/idxd_spec.o 00:04:17.415 CXX test/cpp_headers/init.o 00:04:17.415 CXX test/cpp_headers/ioat.o 00:04:17.415 LINK mkfs 00:04:17.415 CC test/lvol/esnap/esnap.o 00:04:17.415 CC test/nvme/reset/reset.o 00:04:17.415 CC test/nvme/sgl/sgl.o 00:04:17.416 CXX test/cpp_headers/ioat_spec.o 00:04:17.675 LINK aer 00:04:17.675 CC test/nvme/e2edp/nvme_dp.o 00:04:17.675 CC test/nvme/overhead/overhead.o 00:04:17.675 CXX test/cpp_headers/iscsi_spec.o 00:04:17.675 CC test/nvme/err_injection/err_injection.o 00:04:17.675 CC examples/nvmf/nvmf/nvmf.o 00:04:17.675 LINK reset 00:04:17.675 LINK sgl 00:04:17.935 CC test/nvme/startup/startup.o 00:04:17.935 CXX test/cpp_headers/json.o 00:04:17.935 LINK err_injection 00:04:17.935 LINK nvme_dp 00:04:17.935 LINK overhead 00:04:17.935 LINK dif 00:04:17.935 CC test/nvme/reserve/reserve.o 00:04:17.935 CXX test/cpp_headers/jsonrpc.o 00:04:17.935 CC test/nvme/simple_copy/simple_copy.o 00:04:17.935 LINK nvmf 00:04:17.935 LINK 
startup 00:04:17.935 CC test/nvme/connect_stress/connect_stress.o 00:04:18.195 CC test/nvme/boot_partition/boot_partition.o 00:04:18.195 CXX test/cpp_headers/keyring.o 00:04:18.195 CC test/nvme/compliance/nvme_compliance.o 00:04:18.195 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.195 LINK reserve 00:04:18.195 LINK simple_copy 00:04:18.195 LINK connect_stress 00:04:18.195 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:18.195 CXX test/cpp_headers/keyring_module.o 00:04:18.195 LINK boot_partition 00:04:18.195 CC test/bdev/bdevio/bdevio.o 00:04:18.454 LINK fused_ordering 00:04:18.454 CXX test/cpp_headers/likely.o 00:04:18.454 CXX test/cpp_headers/log.o 00:04:18.454 CC test/nvme/fdp/fdp.o 00:04:18.454 CC test/nvme/cuse/cuse.o 00:04:18.454 CXX test/cpp_headers/lvol.o 00:04:18.454 LINK doorbell_aers 00:04:18.454 LINK nvme_compliance 00:04:18.454 CXX test/cpp_headers/md5.o 00:04:18.454 CXX test/cpp_headers/memory.o 00:04:18.454 CXX test/cpp_headers/mmio.o 00:04:18.454 CXX test/cpp_headers/nbd.o 00:04:18.454 CXX test/cpp_headers/net.o 00:04:18.714 CXX test/cpp_headers/notify.o 00:04:18.714 CXX test/cpp_headers/nvme.o 00:04:18.714 LINK bdevio 00:04:18.714 CXX test/cpp_headers/nvme_intel.o 00:04:18.714 CXX test/cpp_headers/nvme_ocssd.o 00:04:18.714 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:18.714 CXX test/cpp_headers/nvme_spec.o 00:04:18.714 CXX test/cpp_headers/nvme_zns.o 00:04:18.714 LINK fdp 00:04:18.714 CXX test/cpp_headers/nvmf_cmd.o 00:04:18.714 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:18.714 CXX test/cpp_headers/nvmf.o 00:04:18.714 CXX test/cpp_headers/nvmf_spec.o 00:04:18.973 CXX test/cpp_headers/nvmf_transport.o 00:04:18.973 CXX test/cpp_headers/opal.o 00:04:18.973 CXX test/cpp_headers/opal_spec.o 00:04:18.973 CXX test/cpp_headers/pci_ids.o 00:04:18.973 CXX test/cpp_headers/pipe.o 00:04:18.973 CXX test/cpp_headers/queue.o 00:04:18.973 CXX test/cpp_headers/reduce.o 00:04:18.973 CXX test/cpp_headers/rpc.o 00:04:18.973 CXX test/cpp_headers/scheduler.o 
00:04:18.973 CXX test/cpp_headers/scsi.o 00:04:18.973 CXX test/cpp_headers/scsi_spec.o 00:04:18.973 CXX test/cpp_headers/sock.o 00:04:18.973 CXX test/cpp_headers/stdinc.o 00:04:18.973 CXX test/cpp_headers/string.o 00:04:18.973 CXX test/cpp_headers/thread.o 00:04:19.232 CXX test/cpp_headers/trace.o 00:04:19.232 CXX test/cpp_headers/trace_parser.o 00:04:19.232 CXX test/cpp_headers/tree.o 00:04:19.232 CXX test/cpp_headers/ublk.o 00:04:19.232 CXX test/cpp_headers/util.o 00:04:19.232 CXX test/cpp_headers/uuid.o 00:04:19.232 CXX test/cpp_headers/version.o 00:04:19.232 CXX test/cpp_headers/vfio_user_pci.o 00:04:19.232 CXX test/cpp_headers/vfio_user_spec.o 00:04:19.232 CXX test/cpp_headers/vhost.o 00:04:19.232 CXX test/cpp_headers/vmd.o 00:04:19.232 CXX test/cpp_headers/xor.o 00:04:19.232 CXX test/cpp_headers/zipf.o 00:04:19.802 LINK cuse 00:04:23.094 LINK esnap 00:04:23.662 00:04:23.662 real 1m24.953s 00:04:23.662 user 7m22.055s 00:04:23.662 sys 1m35.064s 00:04:23.662 01:24:31 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:23.662 01:24:31 make -- common/autotest_common.sh@10 -- $ set +x 00:04:23.662 ************************************ 00:04:23.662 END TEST make 00:04:23.662 ************************************ 00:04:23.662 01:24:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:23.662 01:24:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:23.662 01:24:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:23.662 01:24:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.662 01:24:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:23.662 01:24:31 -- pm/common@44 -- $ pid=5457 00:04:23.662 01:24:31 -- pm/common@50 -- $ kill -TERM 5457 00:04:23.662 01:24:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.662 01:24:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:23.662 
01:24:31 -- pm/common@44 -- $ pid=5459 00:04:23.662 01:24:31 -- pm/common@50 -- $ kill -TERM 5459 00:04:23.662 01:24:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:23.662 01:24:31 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:23.662 01:24:32 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.662 01:24:32 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.662 01:24:32 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.922 01:24:32 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.922 01:24:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.922 01:24:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.922 01:24:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.922 01:24:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.922 01:24:32 -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.922 01:24:32 -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.922 01:24:32 -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.922 01:24:32 -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.922 01:24:32 -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.922 01:24:32 -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.922 01:24:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.922 01:24:32 -- scripts/common.sh@344 -- # case "$op" in 00:04:23.922 01:24:32 -- scripts/common.sh@345 -- # : 1 00:04:23.922 01:24:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.922 01:24:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.922 01:24:32 -- scripts/common.sh@365 -- # decimal 1 00:04:23.922 01:24:32 -- scripts/common.sh@353 -- # local d=1 00:04:23.922 01:24:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.922 01:24:32 -- scripts/common.sh@355 -- # echo 1 00:04:23.922 01:24:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.922 01:24:32 -- scripts/common.sh@366 -- # decimal 2 00:04:23.922 01:24:32 -- scripts/common.sh@353 -- # local d=2 00:04:23.922 01:24:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.922 01:24:32 -- scripts/common.sh@355 -- # echo 2 00:04:23.922 01:24:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.922 01:24:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.922 01:24:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.922 01:24:32 -- scripts/common.sh@368 -- # return 0 00:04:23.922 01:24:32 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.922 01:24:32 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.922 --rc genhtml_branch_coverage=1 00:04:23.922 --rc genhtml_function_coverage=1 00:04:23.922 --rc genhtml_legend=1 00:04:23.922 --rc geninfo_all_blocks=1 00:04:23.922 --rc geninfo_unexecuted_blocks=1 00:04:23.922 00:04:23.922 ' 00:04:23.922 01:24:32 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.922 --rc genhtml_branch_coverage=1 00:04:23.922 --rc genhtml_function_coverage=1 00:04:23.922 --rc genhtml_legend=1 00:04:23.922 --rc geninfo_all_blocks=1 00:04:23.922 --rc geninfo_unexecuted_blocks=1 00:04:23.922 00:04:23.922 ' 00:04:23.922 01:24:32 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.922 --rc genhtml_branch_coverage=1 00:04:23.922 --rc 
genhtml_function_coverage=1 00:04:23.922 --rc genhtml_legend=1 00:04:23.922 --rc geninfo_all_blocks=1 00:04:23.922 --rc geninfo_unexecuted_blocks=1 00:04:23.922 00:04:23.922 ' 00:04:23.922 01:24:32 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.922 --rc genhtml_branch_coverage=1 00:04:23.922 --rc genhtml_function_coverage=1 00:04:23.922 --rc genhtml_legend=1 00:04:23.922 --rc geninfo_all_blocks=1 00:04:23.922 --rc geninfo_unexecuted_blocks=1 00:04:23.922 00:04:23.922 ' 00:04:23.922 01:24:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.922 01:24:32 -- nvmf/common.sh@7 -- # uname -s 00:04:23.922 01:24:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.922 01:24:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.922 01:24:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.922 01:24:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.922 01:24:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.922 01:24:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.922 01:24:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.922 01:24:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.922 01:24:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.922 01:24:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.922 01:24:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c7dc3818-0928-4352-9452-31669c8201e1 00:04:23.922 01:24:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=c7dc3818-0928-4352-9452-31669c8201e1 00:04:23.922 01:24:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.922 01:24:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.922 01:24:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.922 01:24:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:23.922 01:24:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.922 01:24:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.922 01:24:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.922 01:24:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.922 01:24:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.922 01:24:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.922 01:24:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.923 01:24:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.923 01:24:32 -- paths/export.sh@5 -- # export PATH 00:04:23.923 01:24:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.923 01:24:32 -- nvmf/common.sh@51 -- # : 0 00:04:23.923 01:24:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.923 01:24:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.923 01:24:32 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:23.923 01:24:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.923 01:24:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.923 01:24:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.923 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.923 01:24:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.923 01:24:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.923 01:24:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.923 01:24:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:23.923 01:24:32 -- spdk/autotest.sh@32 -- # uname -s 00:04:23.923 01:24:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:23.923 01:24:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:23.923 01:24:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.923 01:24:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:23.923 01:24:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:23.923 01:24:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:23.923 01:24:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:23.923 01:24:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:23.923 01:24:32 -- spdk/autotest.sh@48 -- # udevadm_pid=54421 00:04:23.923 01:24:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:23.923 01:24:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:23.923 01:24:32 -- pm/common@17 -- # local monitor 00:04:23.923 01:24:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.923 01:24:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:23.923 01:24:32 -- pm/common@21 -- # date +%s 00:04:23.923 01:24:32 -- pm/common@21 -- # date +%s 00:04:23.923 01:24:32 -- 
pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731806672 00:04:23.923 01:24:32 -- pm/common@25 -- # sleep 1 00:04:23.923 01:24:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731806672 00:04:23.923 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731806672_collect-vmstat.pm.log 00:04:23.923 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731806672_collect-cpu-load.pm.log 00:04:24.859 01:24:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.859 01:24:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:24.859 01:24:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.859 01:24:33 -- common/autotest_common.sh@10 -- # set +x 00:04:24.859 01:24:33 -- spdk/autotest.sh@59 -- # create_test_list 00:04:24.859 01:24:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:24.859 01:24:33 -- common/autotest_common.sh@10 -- # set +x 00:04:25.118 01:24:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:25.118 01:24:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:25.118 01:24:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:25.118 01:24:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:25.118 01:24:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:25.118 01:24:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:25.118 01:24:33 -- common/autotest_common.sh@1457 -- # uname 00:04:25.118 01:24:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:25.118 01:24:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:25.118 01:24:33 -- common/autotest_common.sh@1477 -- # 
uname 00:04:25.118 01:24:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:25.118 01:24:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:25.118 01:24:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:25.118 lcov: LCOV version 1.15 00:04:25.118 01:24:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:40.034 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:40.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:54.944 01:25:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:54.945 01:25:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.945 01:25:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.945 01:25:02 -- spdk/autotest.sh@78 -- # rm -f 00:04:54.945 01:25:02 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.945 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:54.945 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:54.945 01:25:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:54.945 01:25:03 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:54.945 01:25:03 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:54.945 01:25:03 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:54.945 01:25:03 
-- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:54.945 01:25:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:54.945 01:25:03 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:54.945 01:25:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.945 01:25:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.945 01:25:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:54.945 01:25:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:54.945 01:25:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:54.945 01:25:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:54.945 01:25:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.945 01:25:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:54.945 01:25:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:54.945 01:25:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:54.945 01:25:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:54.945 01:25:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.945 01:25:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:54.945 01:25:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:54.945 01:25:03 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:54.945 01:25:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:54.945 01:25:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:54.945 01:25:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:54.945 01:25:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.945 01:25:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.945 01:25:03 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:04:54.945 01:25:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:54.945 01:25:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:54.945 No valid GPT data, bailing 00:04:54.945 01:25:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.945 01:25:03 -- scripts/common.sh@394 -- # pt= 00:04:54.945 01:25:03 -- scripts/common.sh@395 -- # return 1 00:04:54.945 01:25:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:54.945 1+0 records in 00:04:54.945 1+0 records out 00:04:54.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647615 s, 162 MB/s 00:04:54.945 01:25:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.945 01:25:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.945 01:25:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:54.945 01:25:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:54.945 01:25:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:54.945 No valid GPT data, bailing 00:04:54.945 01:25:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:54.945 01:25:03 -- scripts/common.sh@394 -- # pt= 00:04:54.945 01:25:03 -- scripts/common.sh@395 -- # return 1 00:04:54.945 01:25:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:54.945 1+0 records in 00:04:54.945 1+0 records out 00:04:54.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677103 s, 155 MB/s 00:04:54.945 01:25:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.945 01:25:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.945 01:25:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:54.945 01:25:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:54.945 01:25:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:54.945 
No valid GPT data, bailing 00:04:54.945 01:25:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:54.945 01:25:03 -- scripts/common.sh@394 -- # pt= 00:04:54.945 01:25:03 -- scripts/common.sh@395 -- # return 1 00:04:54.945 01:25:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:54.945 1+0 records in 00:04:54.945 1+0 records out 00:04:54.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00659384 s, 159 MB/s 00:04:54.945 01:25:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:54.945 01:25:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:54.945 01:25:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:54.945 01:25:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:54.945 01:25:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:55.205 No valid GPT data, bailing 00:04:55.205 01:25:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:55.205 01:25:03 -- scripts/common.sh@394 -- # pt= 00:04:55.205 01:25:03 -- scripts/common.sh@395 -- # return 1 00:04:55.205 01:25:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:55.205 1+0 records in 00:04:55.205 1+0 records out 00:04:55.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604554 s, 173 MB/s 00:04:55.205 01:25:03 -- spdk/autotest.sh@105 -- # sync 00:04:55.205 01:25:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:55.205 01:25:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:55.205 01:25:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:57.746 01:25:06 -- spdk/autotest.sh@111 -- # uname -s 00:04:57.746 01:25:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:57.746 01:25:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:57.746 01:25:06 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:58.686 
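Each `/dev/nvme*n*` above goes through `block_in_use`: `spdk-gpt.py` finds no GPT ("No valid GPT data, bailing"), `blkid -s PTTYPE -o value` prints nothing, and the device is then zeroed for 1 MiB with `dd`. A condensed sketch of that per-device cleanup; the injectable probe command is an assumption, added so a plain file can stand in for a block device:

```shell
# Sketch of the cleanup loop traced above: a device whose partition-table
# probe prints nothing is treated as unused, and its first 1 MiB is zeroed,
# mirroring "dd if=/dev/zero of=/dev/nvmeXnY bs=1M count=1". The probe
# command is injectable (hypothetical) for testability; default is blkid.
wipe_if_unused() {
    local block=$1 probe=${2:-blkid}
    local pt
    pt=$("$probe" -s PTTYPE -o value "$block" 2>/dev/null) || pt=
    if [[ -z $pt ]]; then
        # No partition table found: zero the first MiB.
        dd if=/dev/zero of="$block" bs=1M count=1 2>/dev/null
    fi
}
```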
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.686 Hugepages 00:04:58.686 node hugesize free / total 00:04:58.686 node0 1048576kB 0 / 0 00:04:58.686 node0 2048kB 0 / 0 00:04:58.686 00:04:58.686 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:58.686 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:58.945 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:58.946 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:58.946 01:25:07 -- spdk/autotest.sh@117 -- # uname -s 00:04:58.946 01:25:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:58.946 01:25:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:58.946 01:25:07 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:59.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.888 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.888 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.888 01:25:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:01.269 01:25:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:01.269 01:25:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:01.269 01:25:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.269 01:25:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:01.269 01:25:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:01.269 01:25:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:01.269 01:25:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.269 01:25:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:01.269 01:25:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:01.269 01:25:09 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:01.269 01:25:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:01.269 01:25:09 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.530 Waiting for block devices as requested 00:05:01.530 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:01.789 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:01.789 01:25:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:01.789 01:25:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:01.789 01:25:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:01.789 01:25:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:01.789 01:25:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:01.789 01:25:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:01.789 01:25:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:01.789 01:25:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:01.789 01:25:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:01.789 01:25:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:01.789 01:25:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:01.789 01:25:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:01.789 01:25:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:01.789 01:25:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:01.789 01:25:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:01.789 01:25:10 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:05:01.789 01:25:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:01.789 01:25:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:01.789 01:25:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:01.789 01:25:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:01.789 01:25:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:01.789 01:25:10 -- common/autotest_common.sh@1543 -- # continue 00:05:01.789 01:25:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:01.789 01:25:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:01.789 01:25:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:01.789 01:25:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:01.789 01:25:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:01.789 01:25:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:01.789 01:25:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:01.789 01:25:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:01.789 01:25:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:01.789 01:25:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:01.789 01:25:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:01.789 01:25:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:01.789 01:25:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:01.789 01:25:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:01.789 01:25:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:01.789 01:25:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:01.789 01:25:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:05:01.789 01:25:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:01.789 01:25:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:01.789 01:25:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:01.789 01:25:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:01.789 01:25:10 -- common/autotest_common.sh@1543 -- # continue 00:05:01.789 01:25:10 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:01.789 01:25:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.789 01:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:02.049 01:25:10 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:02.049 01:25:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.049 01:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:02.049 01:25:10 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.619 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.879 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.879 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.879 01:25:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:02.879 01:25:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.879 01:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:02.879 01:25:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:02.879 01:25:11 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:02.879 01:25:11 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.879 01:25:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:02.879 01:25:11 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:02.879 01:25:11 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:02.879 01:25:11 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:02.879 01:25:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:02.879 
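Both controllers above report `oacs : 0x12a`; the trace extracts the value with `grep`/`cut` and derives `oacs_ns_manage=8`, i.e. bit 3 (namespace management) is set, before checking that `unvmcap` is 0 and moving on. The extraction can be sketched against captured `id-ctrl` text; taking the text as an argument instead of querying a live controller is an assumption for testability:

```shell
# Sketch of the OACS handling traced above: "nvme id-ctrl" output contains
# a line like "oacs : 0x12a"; the value after the colon is cut out and
# bit 3 (0x8, namespace management) is masked off. Takes the id-ctrl text
# as a parameter (an assumption) rather than running nvme-cli.
oacs_ns_manage() {
    local idctrl=$1 oacs
    oacs=$(grep oacs <<<"$idctrl" | cut -d: -f2)
    echo $(( oacs & 0x8 ))
}
```

With `0x12a` (binary `1_0010_1010`) the mask yields 8, so the `[[ 8 -ne 0 ]]` branch in the log is taken.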
01:25:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:02.879 01:25:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:02.879 01:25:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.879 01:25:11 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:02.879 01:25:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:03.139 01:25:11 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:03.139 01:25:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:03.139 01:25:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:03.139 01:25:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:03.139 01:25:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:03.139 01:25:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.139 01:25:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:03.139 01:25:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:03.139 01:25:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:03.139 01:25:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.139 01:25:11 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:03.139 01:25:11 -- common/autotest_common.sh@1572 -- # return 0 00:05:03.139 01:25:11 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:03.139 01:25:11 -- common/autotest_common.sh@1580 -- # return 0 00:05:03.139 01:25:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:03.139 01:25:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:03.139 01:25:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:03.139 01:25:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:03.139 01:25:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:03.139 01:25:11 -- 
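`get_nvme_bdfs_by_id 0x0a54` above walks the controllers' BDFs and keeps those whose PCI device id in sysfs equals the wanted id; both QEMU devices read `0x0010`, so the list stays empty, `(( 0 > 0 ))` is false, and `opal_revert_cleanup` has nothing to do. A sketch of the lookup; the sysfs-root parameter is an assumption added for testing:

```shell
# Sketch of get_nvme_bdfs_by_id from the trace: for each BDF, read
# /sys/bus/pci/devices/<bdf>/device and print the BDF when it matches the
# wanted id. The sysfs root is a hypothetical parameter so the lookup can
# run against a fake tree; in the log both devices are 0x0010, so a
# search for 0x0a54 yields nothing.
get_bdfs_by_id() {
    local wanted=$1 sysfs=$2; shift 2
    local bdf
    for bdf in "$@"; do
        if [[ $(<"$sysfs/$bdf/device") == "$wanted" ]]; then
            echo "$bdf"
        fi
    done
}
```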
common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.139 01:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.139 01:25:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:03.139 01:25:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.139 01:25:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.139 01:25:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.139 01:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:03.139 ************************************ 00:05:03.139 START TEST env 00:05:03.139 ************************************ 00:05:03.139 01:25:11 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:03.139 * Looking for test storage... 00:05:03.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:03.139 01:25:11 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.139 01:25:11 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.139 01:25:11 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.400 01:25:11 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.400 01:25:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.400 01:25:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.400 01:25:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.400 01:25:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.400 01:25:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.400 01:25:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.400 01:25:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.400 01:25:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.400 01:25:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.400 01:25:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.400 01:25:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.400 01:25:11 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:03.400 01:25:11 env -- scripts/common.sh@345 -- # : 1 00:05:03.400 01:25:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.400 01:25:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.400 01:25:11 env -- scripts/common.sh@365 -- # decimal 1 00:05:03.400 01:25:11 env -- scripts/common.sh@353 -- # local d=1 00:05:03.400 01:25:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.400 01:25:11 env -- scripts/common.sh@355 -- # echo 1 00:05:03.400 01:25:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.400 01:25:11 env -- scripts/common.sh@366 -- # decimal 2 00:05:03.400 01:25:11 env -- scripts/common.sh@353 -- # local d=2 00:05:03.400 01:25:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.400 01:25:11 env -- scripts/common.sh@355 -- # echo 2 00:05:03.400 01:25:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.400 01:25:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.400 01:25:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.400 01:25:11 env -- scripts/common.sh@368 -- # return 0 00:05:03.400 01:25:11 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.400 01:25:11 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.400 --rc genhtml_branch_coverage=1 00:05:03.400 --rc genhtml_function_coverage=1 00:05:03.400 --rc genhtml_legend=1 00:05:03.400 --rc geninfo_all_blocks=1 00:05:03.400 --rc geninfo_unexecuted_blocks=1 00:05:03.400 00:05:03.400 ' 00:05:03.400 01:25:11 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.400 --rc genhtml_branch_coverage=1 00:05:03.400 --rc genhtml_function_coverage=1 00:05:03.400 --rc genhtml_legend=1 00:05:03.400 --rc 
geninfo_all_blocks=1 00:05:03.400 --rc geninfo_unexecuted_blocks=1 00:05:03.400 00:05:03.400 ' 00:05:03.400 01:25:11 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.400 --rc genhtml_branch_coverage=1 00:05:03.400 --rc genhtml_function_coverage=1 00:05:03.400 --rc genhtml_legend=1 00:05:03.400 --rc geninfo_all_blocks=1 00:05:03.400 --rc geninfo_unexecuted_blocks=1 00:05:03.400 00:05:03.400 ' 00:05:03.400 01:25:11 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.400 --rc genhtml_branch_coverage=1 00:05:03.400 --rc genhtml_function_coverage=1 00:05:03.400 --rc genhtml_legend=1 00:05:03.400 --rc geninfo_all_blocks=1 00:05:03.400 --rc geninfo_unexecuted_blocks=1 00:05:03.400 00:05:03.400 ' 00:05:03.400 01:25:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:03.400 01:25:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.400 01:25:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.400 01:25:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.400 ************************************ 00:05:03.400 START TEST env_memory 00:05:03.400 ************************************ 00:05:03.400 01:25:11 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:03.400 00:05:03.400 00:05:03.400 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.400 http://cunit.sourceforge.net/ 00:05:03.400 00:05:03.400 00:05:03.400 Suite: memory 00:05:03.400 Test: alloc and free memory map ...[2024-11-17 01:25:11.723953] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:03.400 passed 00:05:03.400 Test: mem map translation ...[2024-11-17 01:25:11.765691] 
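The lcov gating above runs `lt 1.15 2` through `cmp_versions`, which splits both versions into arrays (on `.`, `-`, `:`) and compares them numerically field by field, so `1.15 < 2` holds and the coverage options are enabled. A condensed sketch of that comparison, splitting only on dots:

```shell
# Condensed sketch of the cmp_versions "<" path traced above: split both
# versions on dots and compare numerically field by field, padding the
# shorter one with zeros. (The real scripts/common.sh also splits on
# '-' and ':'.)
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}
```

Numeric field comparison is what makes `1.2 < 1.10` true here, where a plain string comparison would get it wrong.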
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:03.400 [2024-11-17 01:25:11.765744] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:03.400 [2024-11-17 01:25:11.765808] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:03.400 [2024-11-17 01:25:11.765827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:03.400 passed 00:05:03.400 Test: mem map registration ...[2024-11-17 01:25:11.827521] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:03.400 [2024-11-17 01:25:11.827559] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:03.400 passed 00:05:03.661 Test: mem map adjacent registrations ...passed 00:05:03.661 00:05:03.661 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.661 suites 1 1 n/a 0 0 00:05:03.661 tests 4 4 4 0 0 00:05:03.661 asserts 152 152 152 0 n/a 00:05:03.661 00:05:03.661 Elapsed time = 0.222 seconds 00:05:03.661 00:05:03.661 real 0m0.267s 00:05:03.661 user 0m0.238s 00:05:03.661 sys 0m0.021s 00:05:03.661 01:25:11 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.661 01:25:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:03.661 ************************************ 00:05:03.661 END TEST env_memory 00:05:03.661 ************************************ 00:05:03.661 01:25:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.661 
01:25:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.661 01:25:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.661 01:25:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.661 ************************************ 00:05:03.661 START TEST env_vtophys 00:05:03.661 ************************************ 00:05:03.661 01:25:11 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:03.661 EAL: lib.eal log level changed from notice to debug 00:05:03.661 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 1 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 2 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 3 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 4 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 5 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 6 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 7 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 8 as core 0 on socket 0 00:05:03.661 EAL: Detected lcore 9 as core 0 on socket 0 00:05:03.661 EAL: Maximum logical cores by configuration: 128 00:05:03.661 EAL: Detected CPU lcores: 10 00:05:03.661 EAL: Detected NUMA nodes: 1 00:05:03.661 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:03.661 EAL: Detected shared linkage of DPDK 00:05:03.661 EAL: No shared files mode enabled, IPC will be disabled 00:05:03.661 EAL: Selected IOVA mode 'PA' 00:05:03.661 EAL: Probing VFIO support... 00:05:03.661 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.661 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:03.661 EAL: Ask a virtual area of 0x2e000 bytes 00:05:03.661 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:03.661 EAL: Setting up physically contiguous memory... 
00:05:03.661 EAL: Setting maximum number of open files to 524288 00:05:03.661 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:03.661 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:03.661 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.661 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:03.661 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.661 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.661 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:03.661 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:03.661 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.661 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:03.661 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.661 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.661 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:03.661 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:03.661 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.661 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:03.661 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.661 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.661 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:03.661 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:03.661 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.661 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:03.661 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.661 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.661 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:03.661 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:03.661 EAL: Hugepages will be freed exactly as allocated. 
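Each of the four memseg lists above reserves a 0x61000-byte header plus 0x400000000 bytes of virtual address space; the latter is exactly `n_segs * hugepage_sz` from the "Creating 4 segment lists: n_segs:8192 ... hugepage_sz:2097152" line, as a quick arithmetic check shows:

```shell
# Arithmetic check of the EAL reservations above: each memseg list covers
# n_segs=8192 pages of hugepage_sz=2 MiB, i.e. 0x400000000 bytes (16 GiB)
# of reserved VA per list.
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))
printf '0x%x\n' $(( n_segs * hugepage_sz ))   # → 0x400000000
```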
00:05:03.661 EAL: No shared files mode enabled, IPC is disabled 00:05:03.661 EAL: No shared files mode enabled, IPC is disabled 00:05:03.922 EAL: TSC frequency is ~2290000 KHz 00:05:03.922 EAL: Main lcore 0 is ready (tid=7fdd09323a40;cpuset=[0]) 00:05:03.922 EAL: Trying to obtain current memory policy. 00:05:03.922 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.922 EAL: Restoring previous memory policy: 0 00:05:03.922 EAL: request: mp_malloc_sync 00:05:03.922 EAL: No shared files mode enabled, IPC is disabled 00:05:03.922 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.922 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.922 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.922 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.922 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:03.922 00:05:03.922 00:05:03.922 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.922 http://cunit.sourceforge.net/ 00:05:03.922 00:05:03.922 00:05:03.922 Suite: components_suite 00:05:04.182 Test: vtophys_malloc_test ...passed 00:05:04.182 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.182 EAL: Restoring previous memory policy: 4 00:05:04.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.182 EAL: request: mp_malloc_sync 00:05:04.182 EAL: No shared files mode enabled, IPC is disabled 00:05:04.182 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.182 EAL: request: mp_malloc_sync 00:05:04.182 EAL: No shared files mode enabled, IPC is disabled 00:05:04.182 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.182 EAL: Trying to obtain current memory policy. 
00:05:04.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.182 EAL: Restoring previous memory policy: 4 00:05:04.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.182 EAL: request: mp_malloc_sync 00:05:04.182 EAL: No shared files mode enabled, IPC is disabled 00:05:04.182 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.182 EAL: request: mp_malloc_sync 00:05:04.182 EAL: No shared files mode enabled, IPC is disabled 00:05:04.182 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.182 EAL: Trying to obtain current memory policy. 00:05:04.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.182 EAL: Restoring previous memory policy: 4 00:05:04.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.182 EAL: request: mp_malloc_sync 00:05:04.182 EAL: No shared files mode enabled, IPC is disabled 00:05:04.182 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.182 EAL: request: mp_malloc_sync 00:05:04.182 EAL: No shared files mode enabled, IPC is disabled 00:05:04.182 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.182 EAL: Trying to obtain current memory policy. 00:05:04.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.182 EAL: Restoring previous memory policy: 4 00:05:04.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.182 EAL: request: mp_malloc_sync 00:05:04.182 EAL: No shared files mode enabled, IPC is disabled 00:05:04.182 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.182 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.182 EAL: request: mp_malloc_sync 00:05:04.182 EAL: No shared files mode enabled, IPC is disabled 00:05:04.182 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.443 EAL: Trying to obtain current memory policy. 
00:05:04.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.443 EAL: Restoring previous memory policy: 4 00:05:04.443 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.443 EAL: request: mp_malloc_sync 00:05:04.443 EAL: No shared files mode enabled, IPC is disabled 00:05:04.443 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.443 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.443 EAL: request: mp_malloc_sync 00:05:04.443 EAL: No shared files mode enabled, IPC is disabled 00:05:04.443 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.443 EAL: Trying to obtain current memory policy. 00:05:04.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.443 EAL: Restoring previous memory policy: 4 00:05:04.443 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.443 EAL: request: mp_malloc_sync 00:05:04.443 EAL: No shared files mode enabled, IPC is disabled 00:05:04.443 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.443 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.443 EAL: request: mp_malloc_sync 00:05:04.443 EAL: No shared files mode enabled, IPC is disabled 00:05:04.443 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.703 EAL: Trying to obtain current memory policy. 00:05:04.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.703 EAL: Restoring previous memory policy: 4 00:05:04.703 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.703 EAL: request: mp_malloc_sync 00:05:04.703 EAL: No shared files mode enabled, IPC is disabled 00:05:04.703 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.962 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.962 EAL: request: mp_malloc_sync 00:05:04.962 EAL: No shared files mode enabled, IPC is disabled 00:05:04.962 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.222 EAL: Trying to obtain current memory policy. 
00:05:05.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.222 EAL: Restoring previous memory policy: 4 00:05:05.222 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.222 EAL: request: mp_malloc_sync 00:05:05.222 EAL: No shared files mode enabled, IPC is disabled 00:05:05.222 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.483 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.483 EAL: request: mp_malloc_sync 00:05:05.483 EAL: No shared files mode enabled, IPC is disabled 00:05:05.483 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.052 EAL: Trying to obtain current memory policy. 00:05:06.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.052 EAL: Restoring previous memory policy: 4 00:05:06.052 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.052 EAL: request: mp_malloc_sync 00:05:06.052 EAL: No shared files mode enabled, IPC is disabled 00:05:06.052 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.992 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.992 EAL: request: mp_malloc_sync 00:05:06.992 EAL: No shared files mode enabled, IPC is disabled 00:05:06.992 EAL: Heap on socket 0 was shrunk by 514MB 00:05:07.932 EAL: Trying to obtain current memory policy. 
00:05:07.932 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:07.932 EAL: Restoring previous memory policy: 4
00:05:07.932 EAL: Calling mem event callback 'spdk:(nil)'
00:05:07.932 EAL: request: mp_malloc_sync
00:05:07.932 EAL: No shared files mode enabled, IPC is disabled
00:05:07.932 EAL: Heap on socket 0 was expanded by 1026MB
00:05:09.842 EAL: Calling mem event callback 'spdk:(nil)'
00:05:09.842 EAL: request: mp_malloc_sync
00:05:09.842 EAL: No shared files mode enabled, IPC is disabled
00:05:09.842 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:11.751 passed
00:05:11.751
00:05:11.751 Run Summary: Type Total Ran Passed Failed Inactive
00:05:11.751 suites 1 1 n/a 0 0
00:05:11.751 tests 2 2 2 0 0
00:05:11.751 asserts 5761 5761 5761 0 n/a
00:05:11.751
00:05:11.751 Elapsed time = 7.562 seconds
00:05:11.751 EAL: Calling mem event callback 'spdk:(nil)'
00:05:11.751 EAL: request: mp_malloc_sync
00:05:11.751 EAL: No shared files mode enabled, IPC is disabled
00:05:11.751 EAL: Heap on socket 0 was shrunk by 2MB
00:05:11.751 EAL: No shared files mode enabled, IPC is disabled
00:05:11.751 EAL: No shared files mode enabled, IPC is disabled
00:05:11.751 EAL: No shared files mode enabled, IPC is disabled
00:05:11.751
00:05:11.751 real 0m7.862s
00:05:11.751 user 0m6.956s
00:05:11.751 sys 0m0.756s
00:05:11.751 01:25:19 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:11.751 01:25:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:11.751 ************************************
00:05:11.751 END TEST env_vtophys
00:05:11.751 ************************************
00:05:11.751 01:25:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:11.751 01:25:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:11.751 01:25:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.751 01:25:19 env -- common/autotest_common.sh@10 -- # set +x
00:05:11.751 ************************************
00:05:11.751 START TEST env_pci
00:05:11.751 ************************************
00:05:11.751 01:25:19 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:11.751
00:05:11.751
00:05:11.751 CUnit - A unit testing framework for C - Version 2.1-3
00:05:11.751 http://cunit.sourceforge.net/
00:05:11.751
00:05:11.751
00:05:11.751 Suite: pci
00:05:11.751 Test: pci_hook ...[2024-11-17 01:25:19.965629] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56710 has claimed it
00:05:11.751 passed
00:05:11.751
00:05:11.751 Run Summary: Type Total Ran Passed Failed Inactive
00:05:11.751 suites 1 1 n/a 0 0
00:05:11.751 tests 1 1 1 0 0
00:05:11.751 asserts 25 25 25 0 n/a
00:05:11.751
00:05:11.751 Elapsed time = 0.004 seconds
00:05:11.751 EAL: Cannot find device (10000:00:01.0)
00:05:11.751 EAL: Failed to attach device on primary process
00:05:11.751
00:05:11.751 real 0m0.093s
00:05:11.751 user 0m0.048s
00:05:11.751 sys 0m0.043s
00:05:11.751 01:25:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:11.751 01:25:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:11.751 ************************************
00:05:11.751 END TEST env_pci
00:05:11.751 ************************************
00:05:11.751 01:25:20 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:11.751 01:25:20 env -- env/env.sh@15 -- # uname
00:05:11.751 01:25:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:11.751 01:25:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:11.751 01:25:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:11.751 01:25:20 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:11.751 01:25:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.751 01:25:20 env -- common/autotest_common.sh@10 -- # set +x
00:05:11.751 ************************************
00:05:11.751 START TEST env_dpdk_post_init
00:05:11.751 ************************************
00:05:11.751 01:25:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
EAL: Detected CPU lcores: 10
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
00:05:12.010 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:12.010 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:12.010 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:12.010 Starting DPDK initialization...
00:05:12.010 Starting SPDK post initialization...
00:05:12.011 SPDK NVMe probe
00:05:12.011 Attaching to 0000:00:10.0
00:05:12.011 Attaching to 0000:00:11.0
00:05:12.011 Attached to 0000:00:10.0
00:05:12.011 Attached to 0000:00:11.0
00:05:12.011 Cleaning up...
00:05:12.011
00:05:12.011 real 0m0.277s
00:05:12.011 user 0m0.072s
00:05:12.011 sys 0m0.105s
00:05:12.011 01:25:20 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.011 01:25:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:12.011 ************************************
00:05:12.011 END TEST env_dpdk_post_init
00:05:12.011 ************************************
00:05:12.011 01:25:20 env -- env/env.sh@26 -- # uname
00:05:12.011 01:25:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:12.011 01:25:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:12.011 01:25:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:12.011 01:25:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.011 01:25:20 env -- common/autotest_common.sh@10 -- # set +x
00:05:12.011 ************************************
00:05:12.011 START TEST env_mem_callbacks
00:05:12.011 ************************************
00:05:12.011 01:25:20 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:12.270 EAL: Detected CPU lcores: 10
00:05:12.270 EAL: Detected NUMA nodes: 1
00:05:12.270 EAL: Detected shared linkage of DPDK
00:05:12.270 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:12.270 EAL: Selected IOVA mode 'PA'
00:05:12.270 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:12.270
00:05:12.270
00:05:12.270 CUnit - A unit testing framework for C - Version 2.1-3
00:05:12.270 http://cunit.sourceforge.net/
00:05:12.270
00:05:12.270
00:05:12.270 Suite: memory
00:05:12.270 Test: test ...
00:05:12.270 register 0x200000200000 2097152
00:05:12.270 malloc 3145728
00:05:12.270 register 0x200000400000 4194304
00:05:12.270 buf 0x2000004fffc0 len 3145728 PASSED
00:05:12.270 malloc 64
00:05:12.270 buf 0x2000004ffec0 len 64 PASSED
00:05:12.270 malloc 4194304
00:05:12.270 register 0x200000800000 6291456
00:05:12.270 buf 0x2000009fffc0 len 4194304 PASSED
00:05:12.270 free 0x2000004fffc0 3145728
00:05:12.270 free 0x2000004ffec0 64
00:05:12.270 unregister 0x200000400000 4194304 PASSED
00:05:12.270 free 0x2000009fffc0 4194304
00:05:12.270 unregister 0x200000800000 6291456 PASSED
00:05:12.270 malloc 8388608
00:05:12.270 register 0x200000400000 10485760
00:05:12.270 buf 0x2000005fffc0 len 8388608 PASSED
00:05:12.270 free 0x2000005fffc0 8388608
00:05:12.270 unregister 0x200000400000 10485760 PASSED
00:05:12.270 passed
00:05:12.270
00:05:12.270 Run Summary: Type Total Ran Passed Failed Inactive
00:05:12.270 suites 1 1 n/a 0 0
00:05:12.270 tests 1 1 1 0 0
00:05:12.270 asserts 15 15 15 0 n/a
00:05:12.270
00:05:12.270 Elapsed time = 0.080 seconds
00:05:12.270
00:05:12.270 real 0m0.274s
00:05:12.270 user 0m0.105s
00:05:12.270 sys 0m0.067s
00:05:12.270 01:25:20 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.270 01:25:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:12.270 ************************************
00:05:12.270 END TEST env_mem_callbacks
00:05:12.270 ************************************
00:05:12.531
00:05:12.531 real 0m9.336s
00:05:12.531 user 0m7.663s
00:05:12.531 sys 0m1.326s
00:05:12.531 01:25:20 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:12.531 01:25:20 env -- common/autotest_common.sh@10 -- # set +x
00:05:12.531 ************************************
00:05:12.531 END TEST env
00:05:12.531 ************************************
00:05:12.531 01:25:20 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:12.531 01:25:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:12.531 01:25:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:12.531 01:25:20 -- common/autotest_common.sh@10 -- # set +x
00:05:12.531 ************************************
00:05:12.531 START TEST rpc
00:05:12.531 ************************************
00:05:12.531 01:25:20 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:12.531 * Looking for test storage...
00:05:12.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:12.531 01:25:20 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:12.531 01:25:20 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:12.531 01:25:20 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:12.791 01:25:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:12.791 01:25:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:12.791 01:25:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:12.791 01:25:21 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:12.791 01:25:21 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:12.791 01:25:21 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:12.791 01:25:21 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:12.791 01:25:21 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:12.791 01:25:21 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:12.791 01:25:21 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:12.791 01:25:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:12.791 01:25:21 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:12.791 01:25:21 rpc -- scripts/common.sh@345 -- # : 1
00:05:12.791 01:25:21 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:12.791 01:25:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:12.791 01:25:21 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:12.791 01:25:21 rpc -- scripts/common.sh@353 -- # local d=1
00:05:12.791 01:25:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:12.791 01:25:21 rpc -- scripts/common.sh@355 -- # echo 1
00:05:12.791 01:25:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:12.791 01:25:21 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:12.791 01:25:21 rpc -- scripts/common.sh@353 -- # local d=2
00:05:12.791 01:25:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:12.791 01:25:21 rpc -- scripts/common.sh@355 -- # echo 2
00:05:12.791 01:25:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:12.791 01:25:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:12.791 01:25:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:12.791 01:25:21 rpc -- scripts/common.sh@368 -- # return 0
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.791 --rc genhtml_branch_coverage=1
00:05:12.791 --rc genhtml_function_coverage=1
00:05:12.791 --rc genhtml_legend=1
00:05:12.791 --rc geninfo_all_blocks=1
00:05:12.791 --rc geninfo_unexecuted_blocks=1
00:05:12.791
00:05:12.791 '
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.791 --rc genhtml_branch_coverage=1
00:05:12.791 --rc genhtml_function_coverage=1
00:05:12.791 --rc genhtml_legend=1
00:05:12.791 --rc geninfo_all_blocks=1
00:05:12.791 --rc geninfo_unexecuted_blocks=1
00:05:12.791
00:05:12.791 '
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.791 --rc genhtml_branch_coverage=1
00:05:12.791 --rc genhtml_function_coverage=1
00:05:12.791 --rc genhtml_legend=1
00:05:12.791 --rc geninfo_all_blocks=1
00:05:12.791 --rc geninfo_unexecuted_blocks=1
00:05:12.791
00:05:12.791 '
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:12.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:12.791 --rc genhtml_branch_coverage=1
00:05:12.791 --rc genhtml_function_coverage=1
00:05:12.791 --rc genhtml_legend=1
00:05:12.791 --rc geninfo_all_blocks=1
00:05:12.791 --rc geninfo_unexecuted_blocks=1
00:05:12.791
00:05:12.791 '
00:05:12.791 01:25:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56837
00:05:12.791 01:25:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:12.791 01:25:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56837
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@835 -- # '[' -z 56837 ']'
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:12.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:12.791 01:25:21 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:12.791 01:25:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:05:12.792 [2024-11-17 01:25:21.147903] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:05:12.792 [2024-11-17 01:25:21.148023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56837 ]
00:05:13.051 [2024-11-17 01:25:21.320102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.051 [2024-11-17 01:25:21.424334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:13.051 [2024-11-17 01:25:21.424397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56837' to capture a snapshot of events at runtime.
00:05:13.051 [2024-11-17 01:25:21.424406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:13.051 [2024-11-17 01:25:21.424431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:13.051 [2024-11-17 01:25:21.424438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56837 for offline analysis/debug.
00:05:13.051 [2024-11-17 01:25:21.425794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.992 01:25:22 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:13.992 01:25:22 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:13.992 01:25:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:13.992 01:25:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:13.992 01:25:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:13.992 01:25:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:13.992 01:25:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.992 01:25:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.992 01:25:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:13.992 ************************************
00:05:13.992 START TEST rpc_integrity
00:05:13.992 ************************************
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:13.992 {
00:05:13.992 "name": "Malloc0",
00:05:13.992 "aliases": [
00:05:13.992 "b4463fcc-3dc0-4f99-8259-f16439741adf"
00:05:13.992 ],
00:05:13.992 "product_name": "Malloc disk",
00:05:13.992 "block_size": 512,
00:05:13.992 "num_blocks": 16384,
00:05:13.992 "uuid": "b4463fcc-3dc0-4f99-8259-f16439741adf",
00:05:13.992 "assigned_rate_limits": {
00:05:13.992 "rw_ios_per_sec": 0,
00:05:13.992 "rw_mbytes_per_sec": 0,
00:05:13.992 "r_mbytes_per_sec": 0,
00:05:13.992 "w_mbytes_per_sec": 0
00:05:13.992 },
00:05:13.992 "claimed": false,
00:05:13.992 "zoned": false,
00:05:13.992 "supported_io_types": {
00:05:13.992 "read": true,
00:05:13.992 "write": true,
00:05:13.992 "unmap": true,
00:05:13.992 "flush": true,
00:05:13.992 "reset": true,
00:05:13.992 "nvme_admin": false,
00:05:13.992 "nvme_io": false,
00:05:13.992 "nvme_io_md": false,
00:05:13.992 "write_zeroes": true,
00:05:13.992 "zcopy": true,
00:05:13.992 "get_zone_info": false,
00:05:13.992 "zone_management": false,
00:05:13.992 "zone_append": false,
00:05:13.992 "compare": false,
00:05:13.992 "compare_and_write": false,
00:05:13.992 "abort": true,
00:05:13.992 "seek_hole": false,
00:05:13.992 "seek_data": false,
00:05:13.992 "copy": true,
00:05:13.992 "nvme_iov_md": false
00:05:13.992 },
00:05:13.992 "memory_domains": [
00:05:13.992 {
00:05:13.992 "dma_device_id": "system",
00:05:13.992 "dma_device_type": 1
00:05:13.992 },
00:05:13.992 {
00:05:13.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:13.992 "dma_device_type": 2
00:05:13.992 }
00:05:13.992 ],
00:05:13.992 "driver_specific": {}
00:05:13.992 }
00:05:13.992 ]'
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:13.992 [2024-11-17 01:25:22.423559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:13.992 [2024-11-17 01:25:22.423619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:13.992 [2024-11-17 01:25:22.423640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:05:13.992 [2024-11-17 01:25:22.423654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:13.992 [2024-11-17 01:25:22.425811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:13.992 [2024-11-17 01:25:22.425850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:13.992 Passthru0
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:13.992 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:13.992 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:14.253 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.253 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:14.253 {
00:05:14.253 "name": "Malloc0",
00:05:14.253 "aliases": [
00:05:14.253 "b4463fcc-3dc0-4f99-8259-f16439741adf"
00:05:14.253 ],
00:05:14.253 "product_name": "Malloc disk",
00:05:14.253 "block_size": 512,
00:05:14.253 "num_blocks": 16384,
00:05:14.253 "uuid": "b4463fcc-3dc0-4f99-8259-f16439741adf",
00:05:14.253 "assigned_rate_limits": {
00:05:14.253 "rw_ios_per_sec": 0,
00:05:14.253 "rw_mbytes_per_sec": 0,
00:05:14.253 "r_mbytes_per_sec": 0,
00:05:14.253 "w_mbytes_per_sec": 0
00:05:14.253 },
00:05:14.253 "claimed": true,
00:05:14.253 "claim_type": "exclusive_write",
00:05:14.253 "zoned": false,
00:05:14.253 "supported_io_types": {
00:05:14.253 "read": true,
00:05:14.253 "write": true,
00:05:14.253 "unmap": true,
00:05:14.253 "flush": true,
00:05:14.253 "reset": true,
00:05:14.253 "nvme_admin": false,
00:05:14.253 "nvme_io": false,
00:05:14.253 "nvme_io_md": false,
00:05:14.253 "write_zeroes": true,
00:05:14.253 "zcopy": true,
00:05:14.253 "get_zone_info": false,
00:05:14.253 "zone_management": false,
00:05:14.253 "zone_append": false,
00:05:14.253 "compare": false,
00:05:14.253 "compare_and_write": false,
00:05:14.253 "abort": true,
00:05:14.253 "seek_hole": false,
00:05:14.253 "seek_data": false,
00:05:14.253 "copy": true,
00:05:14.253 "nvme_iov_md": false
00:05:14.253 },
00:05:14.253 "memory_domains": [
00:05:14.253 {
00:05:14.253 "dma_device_id": "system",
00:05:14.253 "dma_device_type": 1
00:05:14.253 },
00:05:14.253 {
00:05:14.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:14.253 "dma_device_type": 2
00:05:14.253 }
00:05:14.253 ],
00:05:14.253 "driver_specific": {}
00:05:14.253 },
00:05:14.253 {
00:05:14.253 "name": "Passthru0",
00:05:14.253 "aliases": [
00:05:14.253 "e916670e-105a-5511-9e83-ec765bd9b317"
00:05:14.253 ],
00:05:14.253 "product_name": "passthru",
00:05:14.253 "block_size": 512,
00:05:14.253 "num_blocks": 16384,
00:05:14.253 "uuid": "e916670e-105a-5511-9e83-ec765bd9b317",
00:05:14.253 "assigned_rate_limits": {
00:05:14.253 "rw_ios_per_sec": 0,
00:05:14.253 "rw_mbytes_per_sec": 0,
00:05:14.253 "r_mbytes_per_sec": 0,
00:05:14.253 "w_mbytes_per_sec": 0
00:05:14.253 },
00:05:14.253 "claimed": false,
00:05:14.253 "zoned": false,
00:05:14.253 "supported_io_types": {
00:05:14.253 "read": true,
00:05:14.253 "write": true,
00:05:14.253 "unmap": true,
00:05:14.253 "flush": true,
00:05:14.253 "reset": true,
00:05:14.253 "nvme_admin": false,
00:05:14.253 "nvme_io": false,
00:05:14.253 "nvme_io_md": false,
00:05:14.253 "write_zeroes": true,
00:05:14.253 "zcopy": true,
00:05:14.253 "get_zone_info": false,
00:05:14.253 "zone_management": false,
00:05:14.253 "zone_append": false,
00:05:14.253 "compare": false,
00:05:14.253 "compare_and_write": false,
00:05:14.253 "abort": true,
00:05:14.253 "seek_hole": false,
00:05:14.254 "seek_data": false,
00:05:14.254 "copy": true,
00:05:14.254 "nvme_iov_md": false
00:05:14.254 },
00:05:14.254 "memory_domains": [
00:05:14.254 {
00:05:14.254 "dma_device_id": "system",
00:05:14.254 "dma_device_type": 1
00:05:14.254 },
00:05:14.254 {
00:05:14.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:14.254 "dma_device_type": 2
00:05:14.254 }
00:05:14.254 ],
00:05:14.254 "driver_specific": {
00:05:14.254 "passthru": {
00:05:14.254 "name": "Passthru0",
00:05:14.254 "base_bdev_name": "Malloc0"
00:05:14.254 }
00:05:14.254 }
00:05:14.254 }
00:05:14.254 ]'
00:05:14.254 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:14.254 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:14.254 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.254 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.254 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.254 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:14.254 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:14.254 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:14.254
00:05:14.254 real 0m0.338s
00:05:14.254 user 0m0.175s
00:05:14.254 sys 0m0.053s
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.254 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:14.254 ************************************
00:05:14.254 END TEST rpc_integrity
00:05:14.254 ************************************
00:05:14.254 01:25:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:14.254 01:25:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.254 01:25:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.254 01:25:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:14.254 ************************************
00:05:14.254 START TEST rpc_plugins
00:05:14.254 ************************************
00:05:14.254 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:05:14.254 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:14.254 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.254 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:14.254 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.254 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:14.254 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:14.254 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.254 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:14.254 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.254 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:14.254 {
00:05:14.254 "name": "Malloc1",
00:05:14.254 "aliases": [
00:05:14.254 "558aba6a-7525-461d-8237-b805ba0fd4aa"
00:05:14.254 ],
00:05:14.254 "product_name": "Malloc disk",
00:05:14.254 "block_size": 4096,
00:05:14.254 "num_blocks": 256,
00:05:14.254 "uuid": "558aba6a-7525-461d-8237-b805ba0fd4aa",
00:05:14.254 "assigned_rate_limits": {
00:05:14.254 "rw_ios_per_sec": 0,
00:05:14.254 "rw_mbytes_per_sec": 0,
00:05:14.254 "r_mbytes_per_sec": 0,
00:05:14.254 "w_mbytes_per_sec": 0
00:05:14.254 },
00:05:14.254 "claimed": false,
00:05:14.254 "zoned": false,
00:05:14.254 "supported_io_types": {
00:05:14.254 "read": true,
00:05:14.254 "write": true,
00:05:14.254 "unmap": true,
00:05:14.254 "flush": true,
00:05:14.254 "reset": true,
00:05:14.254 "nvme_admin": false,
00:05:14.254 "nvme_io": false,
00:05:14.254 "nvme_io_md": false,
00:05:14.254 "write_zeroes": true,
00:05:14.254 "zcopy": true,
00:05:14.254 "get_zone_info": false,
00:05:14.254 "zone_management": false,
00:05:14.254 "zone_append": false,
00:05:14.254 "compare": false,
00:05:14.254 "compare_and_write": false,
00:05:14.254 "abort": true,
00:05:14.254 "seek_hole": false,
00:05:14.254 "seek_data": false,
00:05:14.254 "copy": true,
00:05:14.254 "nvme_iov_md": false
00:05:14.254 },
00:05:14.254 "memory_domains": [
00:05:14.254 {
00:05:14.254 "dma_device_id": "system",
00:05:14.254 "dma_device_type": 1
00:05:14.254 },
00:05:14.254 {
00:05:14.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:14.254 "dma_device_type": 2
00:05:14.254 }
00:05:14.254 ],
00:05:14.254 "driver_specific": {}
00:05:14.254 }
00:05:14.254 ]'
00:05:14.254 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:14.515 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:14.515 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:14.515 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.515 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:14.515 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.515 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:14.515 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.515 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:14.515 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.515 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:14.515 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:14.515 01:25:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:14.515
00:05:14.515 real 0m0.168s
00:05:14.515 user 0m0.093s
00:05:14.515 sys 0m0.026s
00:05:14.515 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.515 01:25:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:14.515 ************************************
00:05:14.515 END TEST rpc_plugins
00:05:14.515 ************************************
00:05:14.515 01:25:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:14.515 01:25:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.515 01:25:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.515 01:25:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:14.515 ************************************
00:05:14.515 START TEST rpc_trace_cmd_test
00:05:14.515 ************************************
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:14.515 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56837",
00:05:14.515 "tpoint_group_mask": "0x8",
00:05:14.515 "iscsi_conn": {
00:05:14.515 "mask": "0x2",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "scsi": {
00:05:14.515 "mask": "0x4",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "bdev": {
00:05:14.515 "mask": "0x8",
00:05:14.515 "tpoint_mask": "0xffffffffffffffff"
00:05:14.515 },
00:05:14.515 "nvmf_rdma": {
00:05:14.515 "mask": "0x10",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "nvmf_tcp": {
00:05:14.515 "mask": "0x20",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "ftl": {
00:05:14.515 "mask": "0x40",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "blobfs": {
00:05:14.515 "mask": "0x80",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "dsa": {
00:05:14.515 "mask": "0x200",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "thread": {
00:05:14.515 "mask": "0x400",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "nvme_pcie": {
00:05:14.515 "mask": "0x800",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "iaa": {
00:05:14.515 "mask": "0x1000",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "nvme_tcp": {
00:05:14.515 "mask": "0x2000",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "bdev_nvme": {
00:05:14.515 "mask": "0x4000",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "sock": {
00:05:14.515 "mask": "0x8000",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "blob": {
00:05:14.515 "mask": "0x10000",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "bdev_raid": {
00:05:14.515 "mask": "0x20000",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 },
00:05:14.515 "scheduler": {
00:05:14.515 "mask": "0x40000",
00:05:14.515 "tpoint_mask": "0x0"
00:05:14.515 }
00:05:14.515 }'
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:05:14.515 01:25:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:14.776 01:25:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:14.776 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:14.776 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:14.776 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:14.776 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:14.776 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:14.776 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:14.776
00:05:14.776 real 0m0.236s
00:05:14.776 user 0m0.200s
00:05:14.776 sys 0m0.027s
00:05:14.776 01:25:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.776 01:25:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.776 ************************************ 00:05:14.776 END TEST rpc_trace_cmd_test 00:05:14.776 ************************************ 00:05:14.776 01:25:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:14.776 01:25:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:14.776 01:25:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:14.776 01:25:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.776 01:25:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.776 01:25:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.776 ************************************ 00:05:14.776 START TEST rpc_daemon_integrity 00:05:14.776 ************************************ 00:05:14.776 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:14.776 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.776 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.776 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.776 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.776 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.776 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.038 { 00:05:15.038 "name": "Malloc2", 00:05:15.038 "aliases": [ 00:05:15.038 "be350e3d-819e-48cf-9b01-68c83ca67672" 00:05:15.038 ], 00:05:15.038 "product_name": "Malloc disk", 00:05:15.038 "block_size": 512, 00:05:15.038 "num_blocks": 16384, 00:05:15.038 "uuid": "be350e3d-819e-48cf-9b01-68c83ca67672", 00:05:15.038 "assigned_rate_limits": { 00:05:15.038 "rw_ios_per_sec": 0, 00:05:15.038 "rw_mbytes_per_sec": 0, 00:05:15.038 "r_mbytes_per_sec": 0, 00:05:15.038 "w_mbytes_per_sec": 0 00:05:15.038 }, 00:05:15.038 "claimed": false, 00:05:15.038 "zoned": false, 00:05:15.038 "supported_io_types": { 00:05:15.038 "read": true, 00:05:15.038 "write": true, 00:05:15.038 "unmap": true, 00:05:15.038 "flush": true, 00:05:15.038 "reset": true, 00:05:15.038 "nvme_admin": false, 00:05:15.038 "nvme_io": false, 00:05:15.038 "nvme_io_md": false, 00:05:15.038 "write_zeroes": true, 00:05:15.038 "zcopy": true, 00:05:15.038 "get_zone_info": false, 00:05:15.038 "zone_management": false, 00:05:15.038 "zone_append": false, 00:05:15.038 "compare": false, 00:05:15.038 "compare_and_write": false, 00:05:15.038 "abort": true, 00:05:15.038 "seek_hole": false, 00:05:15.038 "seek_data": false, 00:05:15.038 "copy": true, 00:05:15.038 "nvme_iov_md": false 00:05:15.038 }, 00:05:15.038 "memory_domains": [ 00:05:15.038 { 00:05:15.038 "dma_device_id": "system", 00:05:15.038 "dma_device_type": 1 00:05:15.038 }, 00:05:15.038 { 00:05:15.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.038 "dma_device_type": 2 00:05:15.038 } 
00:05:15.038 ], 00:05:15.038 "driver_specific": {} 00:05:15.038 } 00:05:15.038 ]' 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.038 [2024-11-17 01:25:23.337238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.038 [2024-11-17 01:25:23.337311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.038 [2024-11-17 01:25:23.337330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:15.038 [2024-11-17 01:25:23.337340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.038 [2024-11-17 01:25:23.339452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.038 [2024-11-17 01:25:23.339495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.038 Passthru0 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.038 { 00:05:15.038 "name": "Malloc2", 00:05:15.038 "aliases": [ 00:05:15.038 "be350e3d-819e-48cf-9b01-68c83ca67672" 
00:05:15.038 ], 00:05:15.038 "product_name": "Malloc disk", 00:05:15.038 "block_size": 512, 00:05:15.038 "num_blocks": 16384, 00:05:15.038 "uuid": "be350e3d-819e-48cf-9b01-68c83ca67672", 00:05:15.038 "assigned_rate_limits": { 00:05:15.038 "rw_ios_per_sec": 0, 00:05:15.038 "rw_mbytes_per_sec": 0, 00:05:15.038 "r_mbytes_per_sec": 0, 00:05:15.038 "w_mbytes_per_sec": 0 00:05:15.038 }, 00:05:15.038 "claimed": true, 00:05:15.038 "claim_type": "exclusive_write", 00:05:15.038 "zoned": false, 00:05:15.038 "supported_io_types": { 00:05:15.038 "read": true, 00:05:15.038 "write": true, 00:05:15.038 "unmap": true, 00:05:15.038 "flush": true, 00:05:15.038 "reset": true, 00:05:15.038 "nvme_admin": false, 00:05:15.038 "nvme_io": false, 00:05:15.038 "nvme_io_md": false, 00:05:15.038 "write_zeroes": true, 00:05:15.038 "zcopy": true, 00:05:15.038 "get_zone_info": false, 00:05:15.038 "zone_management": false, 00:05:15.038 "zone_append": false, 00:05:15.038 "compare": false, 00:05:15.038 "compare_and_write": false, 00:05:15.038 "abort": true, 00:05:15.038 "seek_hole": false, 00:05:15.038 "seek_data": false, 00:05:15.038 "copy": true, 00:05:15.038 "nvme_iov_md": false 00:05:15.038 }, 00:05:15.038 "memory_domains": [ 00:05:15.038 { 00:05:15.038 "dma_device_id": "system", 00:05:15.038 "dma_device_type": 1 00:05:15.038 }, 00:05:15.038 { 00:05:15.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.038 "dma_device_type": 2 00:05:15.038 } 00:05:15.038 ], 00:05:15.038 "driver_specific": {} 00:05:15.038 }, 00:05:15.038 { 00:05:15.038 "name": "Passthru0", 00:05:15.038 "aliases": [ 00:05:15.038 "6a1a3edb-9e8c-5724-8cf8-e5ffc41235e6" 00:05:15.038 ], 00:05:15.038 "product_name": "passthru", 00:05:15.038 "block_size": 512, 00:05:15.038 "num_blocks": 16384, 00:05:15.038 "uuid": "6a1a3edb-9e8c-5724-8cf8-e5ffc41235e6", 00:05:15.038 "assigned_rate_limits": { 00:05:15.038 "rw_ios_per_sec": 0, 00:05:15.038 "rw_mbytes_per_sec": 0, 00:05:15.038 "r_mbytes_per_sec": 0, 00:05:15.038 "w_mbytes_per_sec": 0 
00:05:15.038 }, 00:05:15.038 "claimed": false, 00:05:15.038 "zoned": false, 00:05:15.038 "supported_io_types": { 00:05:15.038 "read": true, 00:05:15.038 "write": true, 00:05:15.038 "unmap": true, 00:05:15.038 "flush": true, 00:05:15.038 "reset": true, 00:05:15.038 "nvme_admin": false, 00:05:15.038 "nvme_io": false, 00:05:15.038 "nvme_io_md": false, 00:05:15.038 "write_zeroes": true, 00:05:15.038 "zcopy": true, 00:05:15.038 "get_zone_info": false, 00:05:15.038 "zone_management": false, 00:05:15.038 "zone_append": false, 00:05:15.038 "compare": false, 00:05:15.038 "compare_and_write": false, 00:05:15.038 "abort": true, 00:05:15.038 "seek_hole": false, 00:05:15.038 "seek_data": false, 00:05:15.038 "copy": true, 00:05:15.038 "nvme_iov_md": false 00:05:15.038 }, 00:05:15.038 "memory_domains": [ 00:05:15.038 { 00:05:15.038 "dma_device_id": "system", 00:05:15.038 "dma_device_type": 1 00:05:15.038 }, 00:05:15.038 { 00:05:15.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.038 "dma_device_type": 2 00:05:15.038 } 00:05:15.038 ], 00:05:15.038 "driver_specific": { 00:05:15.038 "passthru": { 00:05:15.038 "name": "Passthru0", 00:05:15.038 "base_bdev_name": "Malloc2" 00:05:15.038 } 00:05:15.038 } 00:05:15.038 } 00:05:15.038 ]' 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.038 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.039 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.310 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.310 00:05:15.310 real 0m0.338s 00:05:15.310 user 0m0.184s 00:05:15.310 sys 0m0.055s 00:05:15.310 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.310 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.310 ************************************ 00:05:15.310 END TEST rpc_daemon_integrity 00:05:15.310 ************************************ 00:05:15.310 01:25:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.310 01:25:23 rpc -- rpc/rpc.sh@84 -- # killprocess 56837 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@954 -- # '[' -z 56837 ']' 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@958 -- # kill -0 56837 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56837 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.310 
killing process with pid 56837 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56837' 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@973 -- # kill 56837 00:05:15.310 01:25:23 rpc -- common/autotest_common.sh@978 -- # wait 56837 00:05:17.868 00:05:17.868 real 0m4.993s 00:05:17.868 user 0m5.491s 00:05:17.868 sys 0m0.890s 00:05:17.868 01:25:25 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.868 01:25:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.868 ************************************ 00:05:17.868 END TEST rpc 00:05:17.868 ************************************ 00:05:17.868 01:25:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:17.868 01:25:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.868 01:25:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.868 01:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:17.868 ************************************ 00:05:17.868 START TEST skip_rpc 00:05:17.868 ************************************ 00:05:17.868 01:25:25 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:17.868 * Looking for test storage... 
00:05:17.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.868 01:25:26 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.868 01:25:26 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.868 01:25:26 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.868 01:25:26 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.868 01:25:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:17.868 01:25:26 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.868 01:25:26 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.868 --rc genhtml_branch_coverage=1 00:05:17.868 --rc genhtml_function_coverage=1 00:05:17.868 --rc genhtml_legend=1 00:05:17.868 --rc geninfo_all_blocks=1 00:05:17.868 --rc geninfo_unexecuted_blocks=1 00:05:17.868 00:05:17.868 ' 00:05:17.868 01:25:26 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.868 --rc genhtml_branch_coverage=1 00:05:17.868 --rc genhtml_function_coverage=1 00:05:17.869 --rc genhtml_legend=1 00:05:17.869 --rc geninfo_all_blocks=1 00:05:17.869 --rc geninfo_unexecuted_blocks=1 00:05:17.869 00:05:17.869 ' 00:05:17.869 01:25:26 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:17.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.869 --rc genhtml_branch_coverage=1 00:05:17.869 --rc genhtml_function_coverage=1 00:05:17.869 --rc genhtml_legend=1 00:05:17.869 --rc geninfo_all_blocks=1 00:05:17.869 --rc geninfo_unexecuted_blocks=1 00:05:17.869 00:05:17.869 ' 00:05:17.869 01:25:26 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.869 --rc genhtml_branch_coverage=1 00:05:17.869 --rc genhtml_function_coverage=1 00:05:17.869 --rc genhtml_legend=1 00:05:17.869 --rc geninfo_all_blocks=1 00:05:17.869 --rc geninfo_unexecuted_blocks=1 00:05:17.869 00:05:17.869 ' 00:05:17.869 01:25:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.869 01:25:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.869 01:25:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:17.869 01:25:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.869 01:25:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.869 01:25:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.869 ************************************ 00:05:17.869 START TEST skip_rpc 00:05:17.869 ************************************ 00:05:17.869 01:25:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:17.869 01:25:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57066 00:05:17.869 01:25:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:17.869 01:25:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.869 01:25:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:17.869 [2024-11-17 01:25:26.203292] Starting SPDK v25.01-pre 
git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:17.869 [2024-11-17 01:25:26.203417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57066 ] 00:05:18.129 [2024-11-17 01:25:26.374124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.129 [2024-11-17 01:25:26.486540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.412 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57066 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57066 ']' 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57066 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57066 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.413 killing process with pid 57066 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57066' 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57066 00:05:23.413 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57066 00:05:25.320 00:05:25.320 real 0m7.313s 00:05:25.320 user 0m6.860s 00:05:25.320 sys 0m0.374s 00:05:25.320 01:25:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.320 01:25:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.320 ************************************ 00:05:25.320 END TEST skip_rpc 00:05:25.320 ************************************ 00:05:25.320 01:25:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:25.320 01:25:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.320 01:25:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.320 01:25:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.320 
************************************ 00:05:25.320 START TEST skip_rpc_with_json 00:05:25.320 ************************************ 00:05:25.320 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:25.320 01:25:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:25.320 01:25:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57170 00:05:25.320 01:25:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.320 01:25:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.320 01:25:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57170 00:05:25.320 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57170 ']' 00:05:25.321 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.321 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.321 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.321 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.321 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.321 [2024-11-17 01:25:33.581176] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:25.321 [2024-11-17 01:25:33.581308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57170 ] 00:05:25.321 [2024-11-17 01:25:33.733289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.625 [2024-11-17 01:25:33.843846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 [2024-11-17 01:25:34.682446] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:26.566 request: 00:05:26.566 { 00:05:26.566 "trtype": "tcp", 00:05:26.566 "method": "nvmf_get_transports", 00:05:26.566 "req_id": 1 00:05:26.566 } 00:05:26.566 Got JSON-RPC error response 00:05:26.566 response: 00:05:26.566 { 00:05:26.566 "code": -19, 00:05:26.566 "message": "No such device" 00:05:26.566 } 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 [2024-11-17 01:25:34.694538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.566 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.566 { 00:05:26.566 "subsystems": [ 00:05:26.566 { 00:05:26.566 "subsystem": "fsdev", 00:05:26.566 "config": [ 00:05:26.566 { 00:05:26.566 "method": "fsdev_set_opts", 00:05:26.566 "params": { 00:05:26.566 "fsdev_io_pool_size": 65535, 00:05:26.566 "fsdev_io_cache_size": 256 00:05:26.566 } 00:05:26.566 } 00:05:26.566 ] 00:05:26.566 }, 00:05:26.566 { 00:05:26.566 "subsystem": "keyring", 00:05:26.566 "config": [] 00:05:26.566 }, 00:05:26.566 { 00:05:26.566 "subsystem": "iobuf", 00:05:26.566 "config": [ 00:05:26.566 { 00:05:26.566 "method": "iobuf_set_options", 00:05:26.566 "params": { 00:05:26.566 "small_pool_count": 8192, 00:05:26.566 "large_pool_count": 1024, 00:05:26.566 "small_bufsize": 8192, 00:05:26.566 "large_bufsize": 135168, 00:05:26.566 "enable_numa": false 00:05:26.566 } 00:05:26.566 } 00:05:26.566 ] 00:05:26.566 }, 00:05:26.566 { 00:05:26.566 "subsystem": "sock", 00:05:26.566 "config": [ 00:05:26.566 { 00:05:26.566 "method": "sock_set_default_impl", 00:05:26.566 "params": { 00:05:26.566 "impl_name": "posix" 00:05:26.566 } 00:05:26.566 }, 00:05:26.566 { 00:05:26.566 "method": "sock_impl_set_options", 00:05:26.566 "params": { 00:05:26.566 "impl_name": "ssl", 00:05:26.566 "recv_buf_size": 4096, 00:05:26.566 "send_buf_size": 4096, 00:05:26.566 "enable_recv_pipe": true, 00:05:26.566 "enable_quickack": false, 00:05:26.566 
"enable_placement_id": 0, 00:05:26.566 "enable_zerocopy_send_server": true, 00:05:26.566 "enable_zerocopy_send_client": false, 00:05:26.566 "zerocopy_threshold": 0, 00:05:26.566 "tls_version": 0, 00:05:26.566 "enable_ktls": false 00:05:26.566 } 00:05:26.566 }, 00:05:26.566 { 00:05:26.566 "method": "sock_impl_set_options", 00:05:26.566 "params": { 00:05:26.566 "impl_name": "posix", 00:05:26.566 "recv_buf_size": 2097152, 00:05:26.566 "send_buf_size": 2097152, 00:05:26.566 "enable_recv_pipe": true, 00:05:26.566 "enable_quickack": false, 00:05:26.566 "enable_placement_id": 0, 00:05:26.566 "enable_zerocopy_send_server": true, 00:05:26.566 "enable_zerocopy_send_client": false, 00:05:26.566 "zerocopy_threshold": 0, 00:05:26.566 "tls_version": 0, 00:05:26.566 "enable_ktls": false 00:05:26.566 } 00:05:26.566 } 00:05:26.566 ] 00:05:26.566 }, 00:05:26.566 { 00:05:26.566 "subsystem": "vmd", 00:05:26.566 "config": [] 00:05:26.566 }, 00:05:26.566 { 00:05:26.566 "subsystem": "accel", 00:05:26.566 "config": [ 00:05:26.566 { 00:05:26.567 "method": "accel_set_options", 00:05:26.567 "params": { 00:05:26.567 "small_cache_size": 128, 00:05:26.567 "large_cache_size": 16, 00:05:26.567 "task_count": 2048, 00:05:26.567 "sequence_count": 2048, 00:05:26.567 "buf_count": 2048 00:05:26.567 } 00:05:26.567 } 00:05:26.567 ] 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "subsystem": "bdev", 00:05:26.567 "config": [ 00:05:26.567 { 00:05:26.567 "method": "bdev_set_options", 00:05:26.567 "params": { 00:05:26.567 "bdev_io_pool_size": 65535, 00:05:26.567 "bdev_io_cache_size": 256, 00:05:26.567 "bdev_auto_examine": true, 00:05:26.567 "iobuf_small_cache_size": 128, 00:05:26.567 "iobuf_large_cache_size": 16 00:05:26.567 } 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "method": "bdev_raid_set_options", 00:05:26.567 "params": { 00:05:26.567 "process_window_size_kb": 1024, 00:05:26.567 "process_max_bandwidth_mb_sec": 0 00:05:26.567 } 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "method": "bdev_iscsi_set_options", 
00:05:26.567 "params": { 00:05:26.567 "timeout_sec": 30 00:05:26.567 } 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "method": "bdev_nvme_set_options", 00:05:26.567 "params": { 00:05:26.567 "action_on_timeout": "none", 00:05:26.567 "timeout_us": 0, 00:05:26.567 "timeout_admin_us": 0, 00:05:26.567 "keep_alive_timeout_ms": 10000, 00:05:26.567 "arbitration_burst": 0, 00:05:26.567 "low_priority_weight": 0, 00:05:26.567 "medium_priority_weight": 0, 00:05:26.567 "high_priority_weight": 0, 00:05:26.567 "nvme_adminq_poll_period_us": 10000, 00:05:26.567 "nvme_ioq_poll_period_us": 0, 00:05:26.567 "io_queue_requests": 0, 00:05:26.567 "delay_cmd_submit": true, 00:05:26.567 "transport_retry_count": 4, 00:05:26.567 "bdev_retry_count": 3, 00:05:26.567 "transport_ack_timeout": 0, 00:05:26.567 "ctrlr_loss_timeout_sec": 0, 00:05:26.567 "reconnect_delay_sec": 0, 00:05:26.567 "fast_io_fail_timeout_sec": 0, 00:05:26.567 "disable_auto_failback": false, 00:05:26.567 "generate_uuids": false, 00:05:26.567 "transport_tos": 0, 00:05:26.567 "nvme_error_stat": false, 00:05:26.567 "rdma_srq_size": 0, 00:05:26.567 "io_path_stat": false, 00:05:26.567 "allow_accel_sequence": false, 00:05:26.567 "rdma_max_cq_size": 0, 00:05:26.567 "rdma_cm_event_timeout_ms": 0, 00:05:26.567 "dhchap_digests": [ 00:05:26.567 "sha256", 00:05:26.567 "sha384", 00:05:26.567 "sha512" 00:05:26.567 ], 00:05:26.567 "dhchap_dhgroups": [ 00:05:26.567 "null", 00:05:26.567 "ffdhe2048", 00:05:26.567 "ffdhe3072", 00:05:26.567 "ffdhe4096", 00:05:26.567 "ffdhe6144", 00:05:26.567 "ffdhe8192" 00:05:26.567 ] 00:05:26.567 } 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "method": "bdev_nvme_set_hotplug", 00:05:26.567 "params": { 00:05:26.567 "period_us": 100000, 00:05:26.567 "enable": false 00:05:26.567 } 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "method": "bdev_wait_for_examine" 00:05:26.567 } 00:05:26.567 ] 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "subsystem": "scsi", 00:05:26.567 "config": null 00:05:26.567 }, 00:05:26.567 { 
00:05:26.567 "subsystem": "scheduler", 00:05:26.567 "config": [ 00:05:26.567 { 00:05:26.567 "method": "framework_set_scheduler", 00:05:26.567 "params": { 00:05:26.567 "name": "static" 00:05:26.567 } 00:05:26.567 } 00:05:26.567 ] 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "subsystem": "vhost_scsi", 00:05:26.567 "config": [] 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "subsystem": "vhost_blk", 00:05:26.567 "config": [] 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "subsystem": "ublk", 00:05:26.567 "config": [] 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "subsystem": "nbd", 00:05:26.567 "config": [] 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "subsystem": "nvmf", 00:05:26.567 "config": [ 00:05:26.567 { 00:05:26.567 "method": "nvmf_set_config", 00:05:26.567 "params": { 00:05:26.567 "discovery_filter": "match_any", 00:05:26.567 "admin_cmd_passthru": { 00:05:26.567 "identify_ctrlr": false 00:05:26.567 }, 00:05:26.567 "dhchap_digests": [ 00:05:26.567 "sha256", 00:05:26.567 "sha384", 00:05:26.567 "sha512" 00:05:26.567 ], 00:05:26.567 "dhchap_dhgroups": [ 00:05:26.567 "null", 00:05:26.567 "ffdhe2048", 00:05:26.567 "ffdhe3072", 00:05:26.567 "ffdhe4096", 00:05:26.567 "ffdhe6144", 00:05:26.567 "ffdhe8192" 00:05:26.567 ] 00:05:26.567 } 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "method": "nvmf_set_max_subsystems", 00:05:26.567 "params": { 00:05:26.567 "max_subsystems": 1024 00:05:26.567 } 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "method": "nvmf_set_crdt", 00:05:26.567 "params": { 00:05:26.567 "crdt1": 0, 00:05:26.567 "crdt2": 0, 00:05:26.567 "crdt3": 0 00:05:26.567 } 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "method": "nvmf_create_transport", 00:05:26.567 "params": { 00:05:26.567 "trtype": "TCP", 00:05:26.567 "max_queue_depth": 128, 00:05:26.567 "max_io_qpairs_per_ctrlr": 127, 00:05:26.567 "in_capsule_data_size": 4096, 00:05:26.567 "max_io_size": 131072, 00:05:26.567 "io_unit_size": 131072, 00:05:26.567 "max_aq_depth": 128, 00:05:26.567 "num_shared_buffers": 511, 
00:05:26.567 "buf_cache_size": 4294967295, 00:05:26.567 "dif_insert_or_strip": false, 00:05:26.567 "zcopy": false, 00:05:26.567 "c2h_success": true, 00:05:26.567 "sock_priority": 0, 00:05:26.567 "abort_timeout_sec": 1, 00:05:26.567 "ack_timeout": 0, 00:05:26.567 "data_wr_pool_size": 0 00:05:26.567 } 00:05:26.567 } 00:05:26.567 ] 00:05:26.567 }, 00:05:26.567 { 00:05:26.567 "subsystem": "iscsi", 00:05:26.567 "config": [ 00:05:26.567 { 00:05:26.567 "method": "iscsi_set_options", 00:05:26.567 "params": { 00:05:26.567 "node_base": "iqn.2016-06.io.spdk", 00:05:26.567 "max_sessions": 128, 00:05:26.567 "max_connections_per_session": 2, 00:05:26.567 "max_queue_depth": 64, 00:05:26.567 "default_time2wait": 2, 00:05:26.567 "default_time2retain": 20, 00:05:26.567 "first_burst_length": 8192, 00:05:26.567 "immediate_data": true, 00:05:26.567 "allow_duplicated_isid": false, 00:05:26.567 "error_recovery_level": 0, 00:05:26.567 "nop_timeout": 60, 00:05:26.567 "nop_in_interval": 30, 00:05:26.567 "disable_chap": false, 00:05:26.567 "require_chap": false, 00:05:26.567 "mutual_chap": false, 00:05:26.567 "chap_group": 0, 00:05:26.567 "max_large_datain_per_connection": 64, 00:05:26.567 "max_r2t_per_connection": 4, 00:05:26.567 "pdu_pool_size": 36864, 00:05:26.567 "immediate_data_pool_size": 16384, 00:05:26.567 "data_out_pool_size": 2048 00:05:26.567 } 00:05:26.567 } 00:05:26.567 ] 00:05:26.567 } 00:05:26.567 ] 00:05:26.567 } 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57170 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57170 ']' 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57170 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- 
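The subsystem dump above is the JSON configuration the test saves and later replays with `spdk_tgt --json <file>` (visible in the next trace lines). A minimal, hypothetical sketch of sanity-checking such a file before handing it to the target; the path handling and the config fragment are illustrative, not copied from the log:

```shell
# Hypothetical sketch: validate a saved SPDK-style config before replaying
# it with "spdk_tgt --json <file>". The fragment below is illustrative.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "scheduler",
      "config": [
        { "method": "framework_set_scheduler", "params": { "name": "static" } }
      ]
    }
  ]
}
EOF
# Fail fast on malformed JSON instead of letting the target error out later.
python3 -m json.tool "$cfg" > /dev/null && echo "config OK"
rm -f "$cfg"
```

A pre-parse like this keeps a truncated or hand-edited config from producing a confusing startup failure deep inside the target.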
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57170 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.567 killing process with pid 57170 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57170' 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57170 00:05:26.567 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57170 00:05:29.107 01:25:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57222 00:05:29.107 01:25:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.107 01:25:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57222 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57222 ']' 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57222 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57222 00:05:34.382 killing process with pid 57222 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57222' 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57222 00:05:34.382 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57222 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:36.290 00:05:36.290 real 0m11.014s 00:05:36.290 user 0m10.487s 00:05:36.290 sys 0m0.808s 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.290 ************************************ 00:05:36.290 END TEST skip_rpc_with_json 00:05:36.290 ************************************ 00:05:36.290 01:25:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:36.290 01:25:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.290 01:25:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.290 01:25:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.290 ************************************ 00:05:36.290 START TEST skip_rpc_with_delay 00:05:36.290 ************************************ 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:36.290 
01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:36.290 [2024-11-17 01:25:44.661901] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
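The `skip_rpc_with_delay` test above wraps the `spdk_tgt` invocation in `NOT`, asserting that `--wait-for-rpc` combined with `--no-rpc-server` fails with the logged error. The real helper lives in `autotest_common.sh`; this is an illustrative reimplementation of the pattern, not SPDK's code:

```shell
# Sketch of a NOT-style assertion helper: succeed only when the wrapped
# command fails, so expected-error paths can be tested under "set -e".
NOT() {
    if "$@"; then
        return 1   # the command unexpectedly succeeded
    fi
    return 0       # the command failed, as the test expects
}

NOT false && echo "failure detected as expected"
```

Inverting the exit status this way lets a suite running with `set -e` exercise error branches without aborting.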
00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.290 00:05:36.290 real 0m0.152s 00:05:36.290 user 0m0.078s 00:05:36.290 sys 0m0.073s 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.290 01:25:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:36.290 ************************************ 00:05:36.290 END TEST skip_rpc_with_delay 00:05:36.290 ************************************ 00:05:36.550 01:25:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:36.550 01:25:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:36.550 01:25:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:36.550 01:25:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.550 01:25:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.550 01:25:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.550 ************************************ 00:05:36.550 START TEST exit_on_failed_rpc_init 00:05:36.550 ************************************ 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57354 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57354 00:05:36.550 01:25:44 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57354 ']' 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.550 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.550 [2024-11-17 01:25:44.877371] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:36.550 [2024-11-17 01:25:44.877507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57354 ] 00:05:36.809 [2024-11-17 01:25:45.046808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.809 [2024-11-17 01:25:45.152597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.748 01:25:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:37.748 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.748 [2024-11-17 01:25:46.056807] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:37.748 [2024-11-17 01:25:46.056933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57372 ] 00:05:38.008 [2024-11-17 01:25:46.228481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.008 [2024-11-17 01:25:46.339618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.008 [2024-11-17 01:25:46.339731] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:38.008 [2024-11-17 01:25:46.339744] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:38.008 [2024-11-17 01:25:46.339756] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57354 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57354 ']' 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57354 00:05:38.267 01:25:46 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57354 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.267 killing process with pid 57354 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57354' 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57354 00:05:38.267 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57354 00:05:40.806 00:05:40.806 real 0m4.045s 00:05:40.806 user 0m4.354s 00:05:40.806 sys 0m0.488s 00:05:40.806 01:25:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.806 01:25:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.806 ************************************ 00:05:40.806 END TEST exit_on_failed_rpc_init 00:05:40.806 ************************************ 00:05:40.806 01:25:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.806 00:05:40.806 real 0m22.995s 00:05:40.806 user 0m21.981s 00:05:40.806 sys 0m2.021s 00:05:40.806 01:25:48 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.806 01:25:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.806 ************************************ 00:05:40.806 END TEST skip_rpc 00:05:40.806 ************************************ 00:05:40.806 01:25:48 -- spdk/autotest.sh@158 -- # run_test rpc_client 
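The repeated `kill -0 <pid>` / `killing process with pid <pid>` sequence in the traces comes from a `killprocess` helper. A hedged sketch of what such a helper might look like (the real one is in `autotest_common.sh` and also inspects the process name):

```shell
# Illustrative killprocess-style helper: refuse an empty pid, confirm the
# process is alive with "kill -0", then terminate and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 1  # is the process still alive?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap it so no zombie remains
}

sleep 60 &
killprocess $!
```

The `kill -0` probe sends no signal at all; it only checks that the pid exists and is signalable, which is why it appears in the trace before the actual kill.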
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.806 01:25:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.806 01:25:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.806 01:25:48 -- common/autotest_common.sh@10 -- # set +x 00:05:40.806 ************************************ 00:05:40.806 START TEST rpc_client 00:05:40.806 ************************************ 00:05:40.806 01:25:48 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:40.806 * Looking for test storage... 00:05:40.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:40.806 01:25:49 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.806 01:25:49 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.806 01:25:49 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.806 01:25:49 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.806 01:25:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.806 01:25:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.806 01:25:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.806 01:25:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.806 01:25:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.806 01:25:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.806 01:25:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.806 01:25:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.807 01:25:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:40.807 01:25:49 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.807 01:25:49 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.807 --rc genhtml_branch_coverage=1 00:05:40.807 --rc genhtml_function_coverage=1 00:05:40.807 --rc genhtml_legend=1 00:05:40.807 --rc geninfo_all_blocks=1 00:05:40.807 --rc geninfo_unexecuted_blocks=1 00:05:40.807 00:05:40.807 ' 00:05:40.807 01:25:49 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.807 --rc genhtml_branch_coverage=1 00:05:40.807 --rc genhtml_function_coverage=1 00:05:40.807 --rc 
genhtml_legend=1 00:05:40.807 --rc geninfo_all_blocks=1 00:05:40.807 --rc geninfo_unexecuted_blocks=1 00:05:40.807 00:05:40.807 ' 00:05:40.807 01:25:49 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.807 --rc genhtml_branch_coverage=1 00:05:40.807 --rc genhtml_function_coverage=1 00:05:40.807 --rc genhtml_legend=1 00:05:40.807 --rc geninfo_all_blocks=1 00:05:40.807 --rc geninfo_unexecuted_blocks=1 00:05:40.807 00:05:40.807 ' 00:05:40.807 01:25:49 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.807 --rc genhtml_branch_coverage=1 00:05:40.807 --rc genhtml_function_coverage=1 00:05:40.807 --rc genhtml_legend=1 00:05:40.807 --rc geninfo_all_blocks=1 00:05:40.807 --rc geninfo_unexecuted_blocks=1 00:05:40.807 00:05:40.807 ' 00:05:40.807 01:25:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:40.807 OK 00:05:40.807 01:25:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:40.807 00:05:40.807 real 0m0.283s 00:05:40.807 user 0m0.148s 00:05:40.807 sys 0m0.151s 00:05:40.807 01:25:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.807 01:25:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:40.807 ************************************ 00:05:40.807 END TEST rpc_client 00:05:40.807 ************************************ 00:05:41.067 01:25:49 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.067 01:25:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.067 01:25:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.067 01:25:49 -- common/autotest_common.sh@10 -- # set +x 00:05:41.067 ************************************ 00:05:41.067 START TEST json_config 
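The `cmp_versions 1.15 '<' 2` trace above compares dotted version strings field by field to decide which lcov options to use. An illustrative sketch of that kind of comparison (not SPDK's exact `scripts/common.sh` code):

```shell
# Sketch: return 0 when dotted version $1 is strictly less than $2,
# comparing numeric fields left to right and padding missing fields with 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}
        y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Splitting on `.` and comparing numerically avoids the classic string-comparison trap where `"1.15" < "1.2"` lexically.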
00:05:41.067 ************************************ 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.067 01:25:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.067 01:25:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.067 01:25:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.067 01:25:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.067 01:25:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.067 01:25:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.067 01:25:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.067 01:25:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.067 01:25:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.067 01:25:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.067 01:25:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.067 01:25:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:41.067 01:25:49 json_config -- scripts/common.sh@345 -- # : 1 00:05:41.067 01:25:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.067 01:25:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.067 01:25:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:41.067 01:25:49 json_config -- scripts/common.sh@353 -- # local d=1 00:05:41.067 01:25:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.067 01:25:49 json_config -- scripts/common.sh@355 -- # echo 1 00:05:41.067 01:25:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.067 01:25:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:41.067 01:25:49 json_config -- scripts/common.sh@353 -- # local d=2 00:05:41.067 01:25:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.067 01:25:49 json_config -- scripts/common.sh@355 -- # echo 2 00:05:41.067 01:25:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.067 01:25:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.067 01:25:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.067 01:25:49 json_config -- scripts/common.sh@368 -- # return 0 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.067 --rc genhtml_branch_coverage=1 00:05:41.067 --rc genhtml_function_coverage=1 00:05:41.067 --rc genhtml_legend=1 00:05:41.067 --rc geninfo_all_blocks=1 00:05:41.067 --rc geninfo_unexecuted_blocks=1 00:05:41.067 00:05:41.067 ' 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.067 --rc genhtml_branch_coverage=1 00:05:41.067 --rc genhtml_function_coverage=1 00:05:41.067 --rc genhtml_legend=1 00:05:41.067 --rc geninfo_all_blocks=1 00:05:41.067 --rc geninfo_unexecuted_blocks=1 00:05:41.067 00:05:41.067 ' 00:05:41.067 01:25:49 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.067 --rc genhtml_branch_coverage=1 00:05:41.067 --rc genhtml_function_coverage=1 00:05:41.067 --rc genhtml_legend=1 00:05:41.067 --rc geninfo_all_blocks=1 00:05:41.067 --rc geninfo_unexecuted_blocks=1 00:05:41.067 00:05:41.067 ' 00:05:41.067 01:25:49 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.067 --rc genhtml_branch_coverage=1 00:05:41.067 --rc genhtml_function_coverage=1 00:05:41.067 --rc genhtml_legend=1 00:05:41.067 --rc geninfo_all_blocks=1 00:05:41.067 --rc geninfo_unexecuted_blocks=1 00:05:41.067 00:05:41.067 ' 00:05:41.067 01:25:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c7dc3818-0928-4352-9452-31669c8201e1 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=c7dc3818-0928-4352-9452-31669c8201e1 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.067 01:25:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.067 01:25:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.067 01:25:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.067 01:25:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.067 01:25:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.067 01:25:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.067 01:25:49 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.067 01:25:49 json_config -- paths/export.sh@5 -- # export PATH 00:05:41.067 01:25:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@51 -- # : 0 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.067 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.067 01:25:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.067 01:25:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:41.067 01:25:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:41.067 01:25:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:41.067 01:25:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:41.067 01:25:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:41.067 WARNING: No tests are enabled so not running JSON configuration tests 00:05:41.068 01:25:49 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:41.068 01:25:49 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:41.068 ************************************ 00:05:41.068 END TEST json_config 00:05:41.068 ************************************ 00:05:41.068 00:05:41.068 real 0m0.218s 00:05:41.068 user 0m0.129s 00:05:41.068 sys 0m0.098s 00:05:41.068 01:25:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.068 01:25:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.327 01:25:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.327 01:25:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.327 01:25:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.327 01:25:49 -- common/autotest_common.sh@10 -- # set +x 00:05:41.327 ************************************ 00:05:41.327 START TEST json_config_extra_key 00:05:41.327 ************************************ 00:05:41.327 01:25:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:41.327 01:25:49 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.327 01:25:49 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:41.327 01:25:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.327 01:25:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:41.327 01:25:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:41.328 01:25:49 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.328 01:25:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.328 --rc genhtml_branch_coverage=1 00:05:41.328 --rc genhtml_function_coverage=1 00:05:41.328 --rc genhtml_legend=1 00:05:41.328 --rc geninfo_all_blocks=1 00:05:41.328 --rc geninfo_unexecuted_blocks=1 00:05:41.328 00:05:41.328 ' 00:05:41.328 01:25:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.328 --rc genhtml_branch_coverage=1 00:05:41.328 --rc genhtml_function_coverage=1 00:05:41.328 --rc 
genhtml_legend=1 00:05:41.328 --rc geninfo_all_blocks=1 00:05:41.328 --rc geninfo_unexecuted_blocks=1 00:05:41.328 00:05:41.328 ' 00:05:41.328 01:25:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.328 --rc genhtml_branch_coverage=1 00:05:41.328 --rc genhtml_function_coverage=1 00:05:41.328 --rc genhtml_legend=1 00:05:41.328 --rc geninfo_all_blocks=1 00:05:41.328 --rc geninfo_unexecuted_blocks=1 00:05:41.328 00:05:41.328 ' 00:05:41.328 01:25:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.328 --rc genhtml_branch_coverage=1 00:05:41.328 --rc genhtml_function_coverage=1 00:05:41.328 --rc genhtml_legend=1 00:05:41.328 --rc geninfo_all_blocks=1 00:05:41.328 --rc geninfo_unexecuted_blocks=1 00:05:41.328 00:05:41.328 ' 00:05:41.328 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c7dc3818-0928-4352-9452-31669c8201e1 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c7dc3818-0928-4352-9452-31669c8201e1 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.328 01:25:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.328 01:25:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.328 01:25:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.328 01:25:49 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.328 01:25:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.328 01:25:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.588 01:25:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.588 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.588 01:25:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.588 INFO: launching applications... 
00:05:41.588 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57578 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.588 Waiting for target to run... 00:05:41.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57578 /var/tmp/spdk_tgt.sock 00:05:41.588 01:25:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57578 ']' 00:05:41.588 01:25:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:41.588 01:25:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.588 01:25:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.588 01:25:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.588 01:25:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.588 01:25:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.588 [2024-11-17 01:25:49.899880] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:41.588 [2024-11-17 01:25:49.900013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57578 ] 00:05:41.848 [2024-11-17 01:25:50.284639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.107 [2024-11-17 01:25:50.383855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.675 01:25:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.675 01:25:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:42.675 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.675 INFO: shutting down applications... 00:05:42.675 01:25:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:42.675 01:25:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57578 ]] 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57578 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57578 00:05:42.675 01:25:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.300 01:25:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.300 01:25:51 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.300 01:25:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57578 00:05:43.300 01:25:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.885 01:25:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.886 01:25:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.886 01:25:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57578 00:05:43.886 01:25:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.145 01:25:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.145 01:25:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.145 01:25:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57578 00:05:44.145 01:25:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.714 01:25:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.714 01:25:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.714 01:25:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57578 00:05:44.714 01:25:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.283 01:25:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.283 01:25:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.283 01:25:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57578 00:05:45.283 01:25:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.859 01:25:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.859 01:25:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.859 01:25:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57578 00:05:45.859 01:25:54 json_config_extra_key -- json_config/common.sh@42 -- # 
app_pid["$app"]= 00:05:45.859 01:25:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:45.859 01:25:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.859 SPDK target shutdown done 00:05:45.859 01:25:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.859 Success 00:05:45.859 01:25:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:45.859 00:05:45.859 real 0m4.535s 00:05:45.859 user 0m3.837s 00:05:45.859 sys 0m0.553s 00:05:45.859 01:25:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.859 01:25:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.859 ************************************ 00:05:45.859 END TEST json_config_extra_key 00:05:45.859 ************************************ 00:05:45.859 01:25:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.859 01:25:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.859 01:25:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.859 01:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.859 ************************************ 00:05:45.859 START TEST alias_rpc 00:05:45.859 ************************************ 00:05:45.859 01:25:54 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.859 * Looking for test storage... 
00:05:45.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:45.859 01:25:54 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.859 01:25:54 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.859 01:25:54 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.118 01:25:54 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.118 01:25:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:46.118 01:25:54 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.118 01:25:54 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.118 --rc genhtml_branch_coverage=1 00:05:46.118 --rc genhtml_function_coverage=1 00:05:46.118 --rc genhtml_legend=1 00:05:46.118 --rc geninfo_all_blocks=1 00:05:46.118 --rc geninfo_unexecuted_blocks=1 00:05:46.118 00:05:46.118 ' 00:05:46.118 01:25:54 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.118 --rc genhtml_branch_coverage=1 00:05:46.119 --rc genhtml_function_coverage=1 00:05:46.119 --rc genhtml_legend=1 00:05:46.119 --rc geninfo_all_blocks=1 00:05:46.119 --rc geninfo_unexecuted_blocks=1 00:05:46.119 00:05:46.119 ' 00:05:46.119 01:25:54 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:46.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.119 --rc genhtml_branch_coverage=1 00:05:46.119 --rc genhtml_function_coverage=1 00:05:46.119 --rc genhtml_legend=1 00:05:46.119 --rc geninfo_all_blocks=1 00:05:46.119 --rc geninfo_unexecuted_blocks=1 00:05:46.119 00:05:46.119 ' 00:05:46.119 01:25:54 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.119 --rc genhtml_branch_coverage=1 00:05:46.119 --rc genhtml_function_coverage=1 00:05:46.119 --rc genhtml_legend=1 00:05:46.119 --rc geninfo_all_blocks=1 00:05:46.119 --rc geninfo_unexecuted_blocks=1 00:05:46.119 00:05:46.119 ' 00:05:46.119 01:25:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:46.119 01:25:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57689 00:05:46.119 01:25:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57689 00:05:46.119 01:25:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57689 ']' 00:05:46.119 01:25:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.119 01:25:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.119 01:25:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.119 01:25:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.119 01:25:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.119 01:25:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.119 [2024-11-17 01:25:54.476295] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:46.119 [2024-11-17 01:25:54.476731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57689 ] 00:05:46.378 [2024-11-17 01:25:54.646564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.379 [2024-11-17 01:25:54.754157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.318 01:25:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.318 01:25:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.318 01:25:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:47.578 01:25:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57689 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57689 ']' 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57689 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57689 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.578 killing process with pid 57689 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57689' 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 57689 00:05:47.578 01:25:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 57689 00:05:50.119 00:05:50.119 real 0m3.976s 00:05:50.119 user 0m3.963s 00:05:50.119 sys 0m0.578s 00:05:50.119 01:25:58 alias_rpc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:50.119 01:25:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.119 ************************************ 00:05:50.119 END TEST alias_rpc 00:05:50.119 ************************************ 00:05:50.119 01:25:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:50.119 01:25:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:50.119 01:25:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.119 01:25:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.119 01:25:58 -- common/autotest_common.sh@10 -- # set +x 00:05:50.119 ************************************ 00:05:50.119 START TEST spdkcli_tcp 00:05:50.119 ************************************ 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:50.119 * Looking for test storage... 00:05:50.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.119 
01:25:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.119 01:25:58 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.119 --rc genhtml_branch_coverage=1 00:05:50.119 --rc genhtml_function_coverage=1 00:05:50.119 --rc genhtml_legend=1 
00:05:50.119 --rc geninfo_all_blocks=1 00:05:50.119 --rc geninfo_unexecuted_blocks=1 00:05:50.119 00:05:50.119 ' 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.119 --rc genhtml_branch_coverage=1 00:05:50.119 --rc genhtml_function_coverage=1 00:05:50.119 --rc genhtml_legend=1 00:05:50.119 --rc geninfo_all_blocks=1 00:05:50.119 --rc geninfo_unexecuted_blocks=1 00:05:50.119 00:05:50.119 ' 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.119 --rc genhtml_branch_coverage=1 00:05:50.119 --rc genhtml_function_coverage=1 00:05:50.119 --rc genhtml_legend=1 00:05:50.119 --rc geninfo_all_blocks=1 00:05:50.119 --rc geninfo_unexecuted_blocks=1 00:05:50.119 00:05:50.119 ' 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.119 --rc genhtml_branch_coverage=1 00:05:50.119 --rc genhtml_function_coverage=1 00:05:50.119 --rc genhtml_legend=1 00:05:50.119 --rc geninfo_all_blocks=1 00:05:50.119 --rc geninfo_unexecuted_blocks=1 00:05:50.119 00:05:50.119 ' 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:50.119 01:25:58 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57796 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:50.119 01:25:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57796 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57796 ']' 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.119 01:25:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.119 [2024-11-17 01:25:58.541387] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:50.119 [2024-11-17 01:25:58.541584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57796 ] 00:05:50.379 [2024-11-17 01:25:58.714272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.379 [2024-11-17 01:25:58.830414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.379 [2024-11-17 01:25:58.830447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.317 01:25:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.317 01:25:59 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:51.317 01:25:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:51.317 01:25:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57813 00:05:51.317 01:25:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:51.577 [ 00:05:51.577 "bdev_malloc_delete", 00:05:51.577 "bdev_malloc_create", 00:05:51.577 "bdev_null_resize", 00:05:51.577 "bdev_null_delete", 00:05:51.577 "bdev_null_create", 00:05:51.577 "bdev_nvme_cuse_unregister", 00:05:51.577 "bdev_nvme_cuse_register", 00:05:51.577 "bdev_opal_new_user", 00:05:51.577 "bdev_opal_set_lock_state", 00:05:51.577 "bdev_opal_delete", 00:05:51.577 "bdev_opal_get_info", 00:05:51.577 "bdev_opal_create", 00:05:51.577 "bdev_nvme_opal_revert", 00:05:51.577 "bdev_nvme_opal_init", 00:05:51.577 "bdev_nvme_send_cmd", 00:05:51.577 "bdev_nvme_set_keys", 00:05:51.577 "bdev_nvme_get_path_iostat", 00:05:51.577 "bdev_nvme_get_mdns_discovery_info", 00:05:51.577 "bdev_nvme_stop_mdns_discovery", 00:05:51.577 "bdev_nvme_start_mdns_discovery", 00:05:51.577 "bdev_nvme_set_multipath_policy", 00:05:51.577 
"bdev_nvme_set_preferred_path", 00:05:51.577 "bdev_nvme_get_io_paths", 00:05:51.577 "bdev_nvme_remove_error_injection", 00:05:51.577 "bdev_nvme_add_error_injection", 00:05:51.577 "bdev_nvme_get_discovery_info", 00:05:51.577 "bdev_nvme_stop_discovery", 00:05:51.577 "bdev_nvme_start_discovery", 00:05:51.577 "bdev_nvme_get_controller_health_info", 00:05:51.577 "bdev_nvme_disable_controller", 00:05:51.577 "bdev_nvme_enable_controller", 00:05:51.577 "bdev_nvme_reset_controller", 00:05:51.577 "bdev_nvme_get_transport_statistics", 00:05:51.577 "bdev_nvme_apply_firmware", 00:05:51.577 "bdev_nvme_detach_controller", 00:05:51.577 "bdev_nvme_get_controllers", 00:05:51.577 "bdev_nvme_attach_controller", 00:05:51.577 "bdev_nvme_set_hotplug", 00:05:51.577 "bdev_nvme_set_options", 00:05:51.577 "bdev_passthru_delete", 00:05:51.577 "bdev_passthru_create", 00:05:51.577 "bdev_lvol_set_parent_bdev", 00:05:51.577 "bdev_lvol_set_parent", 00:05:51.577 "bdev_lvol_check_shallow_copy", 00:05:51.577 "bdev_lvol_start_shallow_copy", 00:05:51.577 "bdev_lvol_grow_lvstore", 00:05:51.577 "bdev_lvol_get_lvols", 00:05:51.577 "bdev_lvol_get_lvstores", 00:05:51.577 "bdev_lvol_delete", 00:05:51.577 "bdev_lvol_set_read_only", 00:05:51.577 "bdev_lvol_resize", 00:05:51.577 "bdev_lvol_decouple_parent", 00:05:51.577 "bdev_lvol_inflate", 00:05:51.577 "bdev_lvol_rename", 00:05:51.577 "bdev_lvol_clone_bdev", 00:05:51.577 "bdev_lvol_clone", 00:05:51.577 "bdev_lvol_snapshot", 00:05:51.577 "bdev_lvol_create", 00:05:51.577 "bdev_lvol_delete_lvstore", 00:05:51.577 "bdev_lvol_rename_lvstore", 00:05:51.577 "bdev_lvol_create_lvstore", 00:05:51.577 "bdev_raid_set_options", 00:05:51.577 "bdev_raid_remove_base_bdev", 00:05:51.577 "bdev_raid_add_base_bdev", 00:05:51.577 "bdev_raid_delete", 00:05:51.577 "bdev_raid_create", 00:05:51.577 "bdev_raid_get_bdevs", 00:05:51.577 "bdev_error_inject_error", 00:05:51.577 "bdev_error_delete", 00:05:51.577 "bdev_error_create", 00:05:51.577 "bdev_split_delete", 00:05:51.577 
"bdev_split_create", 00:05:51.577 "bdev_delay_delete", 00:05:51.577 "bdev_delay_create", 00:05:51.577 "bdev_delay_update_latency", 00:05:51.577 "bdev_zone_block_delete", 00:05:51.577 "bdev_zone_block_create", 00:05:51.577 "blobfs_create", 00:05:51.577 "blobfs_detect", 00:05:51.577 "blobfs_set_cache_size", 00:05:51.577 "bdev_aio_delete", 00:05:51.577 "bdev_aio_rescan", 00:05:51.577 "bdev_aio_create", 00:05:51.577 "bdev_ftl_set_property", 00:05:51.577 "bdev_ftl_get_properties", 00:05:51.577 "bdev_ftl_get_stats", 00:05:51.577 "bdev_ftl_unmap", 00:05:51.577 "bdev_ftl_unload", 00:05:51.577 "bdev_ftl_delete", 00:05:51.577 "bdev_ftl_load", 00:05:51.577 "bdev_ftl_create", 00:05:51.577 "bdev_virtio_attach_controller", 00:05:51.577 "bdev_virtio_scsi_get_devices", 00:05:51.577 "bdev_virtio_detach_controller", 00:05:51.577 "bdev_virtio_blk_set_hotplug", 00:05:51.577 "bdev_iscsi_delete", 00:05:51.577 "bdev_iscsi_create", 00:05:51.577 "bdev_iscsi_set_options", 00:05:51.577 "accel_error_inject_error", 00:05:51.577 "ioat_scan_accel_module", 00:05:51.577 "dsa_scan_accel_module", 00:05:51.577 "iaa_scan_accel_module", 00:05:51.577 "keyring_file_remove_key", 00:05:51.577 "keyring_file_add_key", 00:05:51.577 "keyring_linux_set_options", 00:05:51.577 "fsdev_aio_delete", 00:05:51.577 "fsdev_aio_create", 00:05:51.577 "iscsi_get_histogram", 00:05:51.577 "iscsi_enable_histogram", 00:05:51.577 "iscsi_set_options", 00:05:51.577 "iscsi_get_auth_groups", 00:05:51.577 "iscsi_auth_group_remove_secret", 00:05:51.577 "iscsi_auth_group_add_secret", 00:05:51.577 "iscsi_delete_auth_group", 00:05:51.577 "iscsi_create_auth_group", 00:05:51.577 "iscsi_set_discovery_auth", 00:05:51.577 "iscsi_get_options", 00:05:51.577 "iscsi_target_node_request_logout", 00:05:51.577 "iscsi_target_node_set_redirect", 00:05:51.577 "iscsi_target_node_set_auth", 00:05:51.577 "iscsi_target_node_add_lun", 00:05:51.577 "iscsi_get_stats", 00:05:51.577 "iscsi_get_connections", 00:05:51.577 "iscsi_portal_group_set_auth", 
00:05:51.577 "iscsi_start_portal_group", 00:05:51.577 "iscsi_delete_portal_group", 00:05:51.577 "iscsi_create_portal_group", 00:05:51.577 "iscsi_get_portal_groups", 00:05:51.577 "iscsi_delete_target_node", 00:05:51.577 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.577 "iscsi_target_node_add_pg_ig_maps", 00:05:51.577 "iscsi_create_target_node", 00:05:51.577 "iscsi_get_target_nodes", 00:05:51.577 "iscsi_delete_initiator_group", 00:05:51.577 "iscsi_initiator_group_remove_initiators", 00:05:51.577 "iscsi_initiator_group_add_initiators", 00:05:51.577 "iscsi_create_initiator_group", 00:05:51.577 "iscsi_get_initiator_groups", 00:05:51.577 "nvmf_set_crdt", 00:05:51.577 "nvmf_set_config", 00:05:51.577 "nvmf_set_max_subsystems", 00:05:51.577 "nvmf_stop_mdns_prr", 00:05:51.577 "nvmf_publish_mdns_prr", 00:05:51.577 "nvmf_subsystem_get_listeners", 00:05:51.577 "nvmf_subsystem_get_qpairs", 00:05:51.577 "nvmf_subsystem_get_controllers", 00:05:51.577 "nvmf_get_stats", 00:05:51.577 "nvmf_get_transports", 00:05:51.577 "nvmf_create_transport", 00:05:51.577 "nvmf_get_targets", 00:05:51.577 "nvmf_delete_target", 00:05:51.577 "nvmf_create_target", 00:05:51.577 "nvmf_subsystem_allow_any_host", 00:05:51.577 "nvmf_subsystem_set_keys", 00:05:51.577 "nvmf_subsystem_remove_host", 00:05:51.577 "nvmf_subsystem_add_host", 00:05:51.577 "nvmf_ns_remove_host", 00:05:51.577 "nvmf_ns_add_host", 00:05:51.577 "nvmf_subsystem_remove_ns", 00:05:51.577 "nvmf_subsystem_set_ns_ana_group", 00:05:51.577 "nvmf_subsystem_add_ns", 00:05:51.578 "nvmf_subsystem_listener_set_ana_state", 00:05:51.578 "nvmf_discovery_get_referrals", 00:05:51.578 "nvmf_discovery_remove_referral", 00:05:51.578 "nvmf_discovery_add_referral", 00:05:51.578 "nvmf_subsystem_remove_listener", 00:05:51.578 "nvmf_subsystem_add_listener", 00:05:51.578 "nvmf_delete_subsystem", 00:05:51.578 "nvmf_create_subsystem", 00:05:51.578 "nvmf_get_subsystems", 00:05:51.578 "env_dpdk_get_mem_stats", 00:05:51.578 "nbd_get_disks", 00:05:51.578 
"nbd_stop_disk", 00:05:51.578 "nbd_start_disk", 00:05:51.578 "ublk_recover_disk", 00:05:51.578 "ublk_get_disks", 00:05:51.578 "ublk_stop_disk", 00:05:51.578 "ublk_start_disk", 00:05:51.578 "ublk_destroy_target", 00:05:51.578 "ublk_create_target", 00:05:51.578 "virtio_blk_create_transport", 00:05:51.578 "virtio_blk_get_transports", 00:05:51.578 "vhost_controller_set_coalescing", 00:05:51.578 "vhost_get_controllers", 00:05:51.578 "vhost_delete_controller", 00:05:51.578 "vhost_create_blk_controller", 00:05:51.578 "vhost_scsi_controller_remove_target", 00:05:51.578 "vhost_scsi_controller_add_target", 00:05:51.578 "vhost_start_scsi_controller", 00:05:51.578 "vhost_create_scsi_controller", 00:05:51.578 "thread_set_cpumask", 00:05:51.578 "scheduler_set_options", 00:05:51.578 "framework_get_governor", 00:05:51.578 "framework_get_scheduler", 00:05:51.578 "framework_set_scheduler", 00:05:51.578 "framework_get_reactors", 00:05:51.578 "thread_get_io_channels", 00:05:51.578 "thread_get_pollers", 00:05:51.578 "thread_get_stats", 00:05:51.578 "framework_monitor_context_switch", 00:05:51.578 "spdk_kill_instance", 00:05:51.578 "log_enable_timestamps", 00:05:51.578 "log_get_flags", 00:05:51.578 "log_clear_flag", 00:05:51.578 "log_set_flag", 00:05:51.578 "log_get_level", 00:05:51.578 "log_set_level", 00:05:51.578 "log_get_print_level", 00:05:51.578 "log_set_print_level", 00:05:51.578 "framework_enable_cpumask_locks", 00:05:51.578 "framework_disable_cpumask_locks", 00:05:51.578 "framework_wait_init", 00:05:51.578 "framework_start_init", 00:05:51.578 "scsi_get_devices", 00:05:51.578 "bdev_get_histogram", 00:05:51.578 "bdev_enable_histogram", 00:05:51.578 "bdev_set_qos_limit", 00:05:51.578 "bdev_set_qd_sampling_period", 00:05:51.578 "bdev_get_bdevs", 00:05:51.578 "bdev_reset_iostat", 00:05:51.578 "bdev_get_iostat", 00:05:51.578 "bdev_examine", 00:05:51.578 "bdev_wait_for_examine", 00:05:51.578 "bdev_set_options", 00:05:51.578 "accel_get_stats", 00:05:51.578 "accel_set_options", 
00:05:51.578 "accel_set_driver", 00:05:51.578 "accel_crypto_key_destroy", 00:05:51.578 "accel_crypto_keys_get", 00:05:51.578 "accel_crypto_key_create", 00:05:51.578 "accel_assign_opc", 00:05:51.578 "accel_get_module_info", 00:05:51.578 "accel_get_opc_assignments", 00:05:51.578 "vmd_rescan", 00:05:51.578 "vmd_remove_device", 00:05:51.578 "vmd_enable", 00:05:51.578 "sock_get_default_impl", 00:05:51.578 "sock_set_default_impl", 00:05:51.578 "sock_impl_set_options", 00:05:51.578 "sock_impl_get_options", 00:05:51.578 "iobuf_get_stats", 00:05:51.578 "iobuf_set_options", 00:05:51.578 "keyring_get_keys", 00:05:51.578 "framework_get_pci_devices", 00:05:51.578 "framework_get_config", 00:05:51.578 "framework_get_subsystems", 00:05:51.578 "fsdev_set_opts", 00:05:51.578 "fsdev_get_opts", 00:05:51.578 "trace_get_info", 00:05:51.578 "trace_get_tpoint_group_mask", 00:05:51.578 "trace_disable_tpoint_group", 00:05:51.578 "trace_enable_tpoint_group", 00:05:51.578 "trace_clear_tpoint_mask", 00:05:51.578 "trace_set_tpoint_mask", 00:05:51.578 "notify_get_notifications", 00:05:51.578 "notify_get_types", 00:05:51.578 "spdk_get_version", 00:05:51.578 "rpc_get_methods" 00:05:51.578 ] 00:05:51.578 01:25:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.578 01:25:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.578 01:25:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57796 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57796 ']' 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57796 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.578 01:25:59 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57796 00:05:51.578 killing process with pid 57796 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57796' 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57796 00:05:51.578 01:25:59 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57796 00:05:54.115 ************************************ 00:05:54.115 END TEST spdkcli_tcp 00:05:54.115 ************************************ 00:05:54.115 00:05:54.115 real 0m3.986s 00:05:54.115 user 0m7.021s 00:05:54.115 sys 0m0.620s 00:05:54.115 01:26:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.115 01:26:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.115 01:26:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.115 01:26:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.115 01:26:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.115 01:26:02 -- common/autotest_common.sh@10 -- # set +x 00:05:54.115 ************************************ 00:05:54.115 START TEST dpdk_mem_utility 00:05:54.115 ************************************ 00:05:54.115 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.115 * Looking for test storage... 
00:05:54.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:54.115 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.115 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.115 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.115 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.115 01:26:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:54.115 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.115 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.115 --rc genhtml_branch_coverage=1 00:05:54.115 --rc genhtml_function_coverage=1 00:05:54.115 --rc genhtml_legend=1 00:05:54.115 --rc geninfo_all_blocks=1 00:05:54.115 --rc geninfo_unexecuted_blocks=1 00:05:54.115 00:05:54.115 ' 00:05:54.115 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.115 --rc genhtml_branch_coverage=1 00:05:54.115 --rc genhtml_function_coverage=1 00:05:54.116 --rc genhtml_legend=1 00:05:54.116 --rc geninfo_all_blocks=1 00:05:54.116 --rc 
geninfo_unexecuted_blocks=1 00:05:54.116 00:05:54.116 ' 00:05:54.116 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.116 --rc genhtml_branch_coverage=1 00:05:54.116 --rc genhtml_function_coverage=1 00:05:54.116 --rc genhtml_legend=1 00:05:54.116 --rc geninfo_all_blocks=1 00:05:54.116 --rc geninfo_unexecuted_blocks=1 00:05:54.116 00:05:54.116 ' 00:05:54.116 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.116 --rc genhtml_branch_coverage=1 00:05:54.116 --rc genhtml_function_coverage=1 00:05:54.116 --rc genhtml_legend=1 00:05:54.116 --rc geninfo_all_blocks=1 00:05:54.116 --rc geninfo_unexecuted_blocks=1 00:05:54.116 00:05:54.116 ' 00:05:54.116 01:26:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:54.116 01:26:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57917 00:05:54.116 01:26:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.116 01:26:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57917 00:05:54.116 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57917 ']' 00:05:54.116 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.116 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.116 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:54.116 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.116 01:26:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.116 [2024-11-17 01:26:02.570653] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:54.116 [2024-11-17 01:26:02.570839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57917 ] 00:05:54.375 [2024-11-17 01:26:02.740001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.635 [2024-11-17 01:26:02.849905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.576 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.576 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:55.576 01:26:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:55.576 01:26:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:55.576 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.576 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.576 { 00:05:55.576 "filename": "/tmp/spdk_mem_dump.txt" 00:05:55.576 } 00:05:55.576 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.576 01:26:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:55.576 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:55.576 1 heaps totaling size 816.000000 MiB 00:05:55.576 size: 816.000000 MiB heap id: 0 00:05:55.576 end heaps---------- 00:05:55.576 9 mempools totaling size 595.772034 MiB 00:05:55.576 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:55.576 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:55.576 size: 92.545471 MiB name: bdev_io_57917 00:05:55.576 size: 50.003479 MiB name: msgpool_57917 00:05:55.576 size: 36.509338 MiB name: fsdev_io_57917 00:05:55.576 size: 21.763794 MiB name: PDU_Pool 00:05:55.576 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:55.576 size: 4.133484 MiB name: evtpool_57917 00:05:55.576 size: 0.026123 MiB name: Session_Pool 00:05:55.576 end mempools------- 00:05:55.576 6 memzones totaling size 4.142822 MiB 00:05:55.576 size: 1.000366 MiB name: RG_ring_0_57917 00:05:55.576 size: 1.000366 MiB name: RG_ring_1_57917 00:05:55.576 size: 1.000366 MiB name: RG_ring_4_57917 00:05:55.576 size: 1.000366 MiB name: RG_ring_5_57917 00:05:55.576 size: 0.125366 MiB name: RG_ring_2_57917 00:05:55.576 size: 0.015991 MiB name: RG_ring_3_57917 00:05:55.576 end memzones------- 00:05:55.576 01:26:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:55.576 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:05:55.576 list of free elements. 
size: 16.790161 MiB 00:05:55.576 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:55.576 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:55.576 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:55.576 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:55.576 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:55.576 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:55.576 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:55.576 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:55.576 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:55.576 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:55.576 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:55.576 element at address: 0x20001ac00000 with size: 0.560486 MiB 00:05:55.576 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:55.576 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:55.577 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:55.577 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:55.577 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:55.577 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:55.577 list of standard malloc elements. 
size: 199.288940 MiB 00:05:55.577 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:55.577 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:55.577 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:55.577 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:55.577 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:55.577 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:55.577 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:55.577 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:55.577 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:55.577 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:55.577 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:55.577 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:55.577 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:55.577 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:55.577 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:55.577 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:55.578 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:55.578 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:55.578 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:55.578 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac90cc0 with size: 0.000244 
MiB 00:05:55.578 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:55.578 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac928c0 
with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:55.579 element at 
address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:55.579 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:55.579 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b780 with size: 0.000244 MiB 
00:05:55.579 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806d380 with 
size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:55.579 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:55.580 element at address: 
0x20002806ef80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:55.580 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:55.580 list of memzone associated elements. 
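The malloc-element dump above lists every DPDK heap element as "element at address: 0x... with size: N MiB". As a quick sanity check on such a dump, the per-element sizes can be tallied and compared against the reported list total. The sketch below is illustrative only (the `sum_element_sizes` helper is not part of the SPDK tooling; it just parses lines in the format shown):

```shell
#!/usr/bin/env bash
# Sum the per-element sizes from a DPDK memory dump and print the total in MiB.
# Input format (one entry per line): "element at address: 0x... with size: <N> MiB"
sum_element_sizes() {
    awk '/element at address/ {
        # The size value is the field immediately after "size:"
        for (i = 1; i < NF; i++) if ($i == "size:") total += $(i + 1)
    } END { printf "%.6f MiB\n", total }'
}

# Example: feed two dump lines and tally them.
printf '%s\n' \
    'element at address: 0x200028063f40 with size: 0.000244 MiB' \
    'element at address: 0x20000a7fef80 with size: 132.000183 MiB' \
    | sum_element_sizes   # prints 132.000427 MiB
```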
size: 599.920898 MiB 00:05:55.580 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:55.580 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:55.580 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:55.580 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:55.580 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:55.580 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57917_0 00:05:55.580 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:55.580 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57917_0 00:05:55.580 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:55.580 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57917_0 00:05:55.580 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:55.580 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:55.580 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:55.580 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:55.580 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:55.580 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57917_0 00:05:55.580 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:55.580 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57917 00:05:55.580 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:55.580 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57917 00:05:55.580 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:55.580 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:55.580 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:55.580 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:55.580 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:55.580 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:55.580 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:55.580 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:55.580 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:55.580 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57917 00:05:55.580 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:55.580 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57917 00:05:55.580 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:55.580 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57917 00:05:55.580 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:55.580 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57917 00:05:55.580 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:55.580 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57917 00:05:55.580 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:55.580 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57917 00:05:55.580 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:55.580 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:55.580 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:55.580 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:55.580 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:55.580 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:55.580 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:55.581 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57917 00:05:55.581 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:55.581 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57917 00:05:55.581 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:55.581 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:55.581 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:55.581 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:55.581 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:55.581 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57917 00:05:55.581 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:55.581 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:55.581 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:55.581 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57917 00:05:55.581 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:55.581 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57917 00:05:55.581 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:55.581 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57917 00:05:55.581 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:55.581 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:55.581 01:26:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:55.581 01:26:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57917 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57917 ']' 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57917 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57917 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.581 01:26:03 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57917' 00:05:55.581 killing process with pid 57917 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57917 00:05:55.581 01:26:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57917 00:05:58.121 00:05:58.121 real 0m3.854s 00:05:58.121 user 0m3.776s 00:05:58.121 sys 0m0.545s 00:05:58.121 01:26:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.121 01:26:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.121 ************************************ 00:05:58.121 END TEST dpdk_mem_utility 00:05:58.121 ************************************ 00:05:58.121 01:26:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.121 01:26:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.121 01:26:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.121 01:26:06 -- common/autotest_common.sh@10 -- # set +x 00:05:58.121 ************************************ 00:05:58.121 START TEST event 00:05:58.121 ************************************ 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.121 * Looking for test storage... 
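The `killprocess 57917` trace above shows the guard sequence used by the autotest helper before killing the app under test: check the pid argument is non-empty, probe the process with `kill -0`, resolve its command name via `ps` (refusing to proceed if it resolves to `sudo`), then kill and `wait`. A minimal hedged sketch of that flow (simplified; the real helper in `common/autotest_common.sh` has additional branches for non-Linux platforms):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess guard traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1              # mirrors the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1 # process must exist before we act
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1      # never kill a sudo wrapper process
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                # reap it if it is our child
    return 0
}
```

Resetting the trap first (`trap - SIGINT SIGTERM EXIT`, as in the trace) keeps the cleanup handler from firing a second time once the process is gone.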
00:05:58.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.121 01:26:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.121 01:26:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.121 01:26:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.121 01:26:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.121 01:26:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.121 01:26:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.121 01:26:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.121 01:26:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.121 01:26:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.121 01:26:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.121 01:26:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.121 01:26:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:58.121 01:26:06 event -- scripts/common.sh@345 -- # : 1 00:05:58.121 01:26:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.121 01:26:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.121 01:26:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:58.121 01:26:06 event -- scripts/common.sh@353 -- # local d=1 00:05:58.121 01:26:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.121 01:26:06 event -- scripts/common.sh@355 -- # echo 1 00:05:58.121 01:26:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.121 01:26:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:58.121 01:26:06 event -- scripts/common.sh@353 -- # local d=2 00:05:58.121 01:26:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.121 01:26:06 event -- scripts/common.sh@355 -- # echo 2 00:05:58.121 01:26:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.121 01:26:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.121 01:26:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.121 01:26:06 event -- scripts/common.sh@368 -- # return 0 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.121 --rc genhtml_branch_coverage=1 00:05:58.121 --rc genhtml_function_coverage=1 00:05:58.121 --rc genhtml_legend=1 00:05:58.121 --rc geninfo_all_blocks=1 00:05:58.121 --rc geninfo_unexecuted_blocks=1 00:05:58.121 00:05:58.121 ' 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.121 --rc genhtml_branch_coverage=1 00:05:58.121 --rc genhtml_function_coverage=1 00:05:58.121 --rc genhtml_legend=1 00:05:58.121 --rc geninfo_all_blocks=1 00:05:58.121 --rc geninfo_unexecuted_blocks=1 00:05:58.121 00:05:58.121 ' 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.121 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:58.121 --rc genhtml_branch_coverage=1 00:05:58.121 --rc genhtml_function_coverage=1 00:05:58.121 --rc genhtml_legend=1 00:05:58.121 --rc geninfo_all_blocks=1 00:05:58.121 --rc geninfo_unexecuted_blocks=1 00:05:58.121 00:05:58.121 ' 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.121 --rc genhtml_branch_coverage=1 00:05:58.121 --rc genhtml_function_coverage=1 00:05:58.121 --rc genhtml_legend=1 00:05:58.121 --rc geninfo_all_blocks=1 00:05:58.121 --rc geninfo_unexecuted_blocks=1 00:05:58.121 00:05:58.121 ' 00:05:58.121 01:26:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:58.121 01:26:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.121 01:26:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:58.121 01:26:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.121 01:26:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.121 ************************************ 00:05:58.121 START TEST event_perf 00:05:58.121 ************************************ 00:05:58.121 01:26:06 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.121 Running I/O for 1 seconds...[2024-11-17 01:26:06.466014] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:58.121 [2024-11-17 01:26:06.466131] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58021 ] 00:05:58.381 [2024-11-17 01:26:06.624636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.381 [2024-11-17 01:26:06.743453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.381 [2024-11-17 01:26:06.743647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.381 [2024-11-17 01:26:06.743775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.381 [2024-11-17 01:26:06.743745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.764 Running I/O for 1 seconds... 00:05:59.764 lcore 0: 107441 00:05:59.764 lcore 1: 107439 00:05:59.764 lcore 2: 107437 00:05:59.764 lcore 3: 107440 00:05:59.764 done. 
00:05:59.764 00:05:59.764 real 0m1.563s 00:05:59.764 user 0m4.334s 00:05:59.764 sys 0m0.108s 00:05:59.764 01:26:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.764 ************************************ 00:05:59.764 END TEST event_perf 00:05:59.764 ************************************ 00:05:59.764 01:26:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.764 01:26:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.764 01:26:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:59.764 01:26:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.764 01:26:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.764 ************************************ 00:05:59.764 START TEST event_reactor 00:05:59.764 ************************************ 00:05:59.764 01:26:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.764 [2024-11-17 01:26:08.090555] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:59.764 [2024-11-17 01:26:08.090651] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58066 ] 00:06:00.024 [2024-11-17 01:26:08.264457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.024 [2024-11-17 01:26:08.371387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.404 test_start 00:06:01.404 oneshot 00:06:01.404 tick 100 00:06:01.404 tick 100 00:06:01.404 tick 250 00:06:01.404 tick 100 00:06:01.404 tick 100 00:06:01.404 tick 100 00:06:01.404 tick 250 00:06:01.404 tick 500 00:06:01.404 tick 100 00:06:01.404 tick 100 00:06:01.404 tick 250 00:06:01.404 tick 100 00:06:01.404 tick 100 00:06:01.404 test_end 00:06:01.404 00:06:01.404 real 0m1.539s 00:06:01.404 user 0m1.338s 00:06:01.404 sys 0m0.091s 00:06:01.404 01:26:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.404 ************************************ 00:06:01.405 END TEST event_reactor 00:06:01.405 ************************************ 00:06:01.405 01:26:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:01.405 01:26:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.405 01:26:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:01.405 01:26:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.405 01:26:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.405 ************************************ 00:06:01.405 START TEST event_reactor_perf 00:06:01.405 ************************************ 00:06:01.405 01:26:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.405 [2024-11-17 
01:26:09.697439] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:01.405 [2024-11-17 01:26:09.697608] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58097 ] 00:06:01.664 [2024-11-17 01:26:09.871620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.664 [2024-11-17 01:26:09.983864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.045 test_start 00:06:03.045 test_end 00:06:03.045 Performance: 401366 events per second 00:06:03.045 00:06:03.045 real 0m1.554s 00:06:03.045 user 0m1.344s 00:06:03.045 sys 0m0.099s 00:06:03.045 01:26:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.045 ************************************ 00:06:03.045 END TEST event_reactor_perf 00:06:03.045 ************************************ 00:06:03.045 01:26:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.045 01:26:11 event -- event/event.sh@49 -- # uname -s 00:06:03.045 01:26:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:03.045 01:26:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.045 01:26:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.045 01:26:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.045 01:26:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.045 ************************************ 00:06:03.045 START TEST event_scheduler 00:06:03.045 ************************************ 00:06:03.045 01:26:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.045 * Looking for test storage... 
00:06:03.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:03.045 01:26:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.045 01:26:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.045 01:26:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.045 01:26:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.045 01:26:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:03.046 01:26:11 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.046 01:26:11 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.046 --rc genhtml_branch_coverage=1 00:06:03.046 --rc genhtml_function_coverage=1 00:06:03.046 --rc genhtml_legend=1 00:06:03.046 --rc geninfo_all_blocks=1 00:06:03.046 --rc geninfo_unexecuted_blocks=1 00:06:03.046 00:06:03.046 ' 00:06:03.046 01:26:11 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.046 --rc genhtml_branch_coverage=1 00:06:03.046 --rc genhtml_function_coverage=1 00:06:03.046 --rc 
genhtml_legend=1 00:06:03.046 --rc geninfo_all_blocks=1 00:06:03.046 --rc geninfo_unexecuted_blocks=1 00:06:03.046 00:06:03.046 ' 00:06:03.046 01:26:11 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.046 --rc genhtml_branch_coverage=1 00:06:03.046 --rc genhtml_function_coverage=1 00:06:03.046 --rc genhtml_legend=1 00:06:03.046 --rc geninfo_all_blocks=1 00:06:03.046 --rc geninfo_unexecuted_blocks=1 00:06:03.046 00:06:03.046 ' 00:06:03.046 01:26:11 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.046 --rc genhtml_branch_coverage=1 00:06:03.046 --rc genhtml_function_coverage=1 00:06:03.046 --rc genhtml_legend=1 00:06:03.046 --rc geninfo_all_blocks=1 00:06:03.046 --rc geninfo_unexecuted_blocks=1 00:06:03.046 00:06:03.046 ' 00:06:03.305 01:26:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:03.305 01:26:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58173 00:06:03.305 01:26:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:03.305 01:26:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.305 01:26:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58173 00:06:03.305 01:26:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58173 ']' 00:06:03.305 01:26:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:03.305 01:26:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.306 01:26:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.306 01:26:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.306 01:26:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.306 [2024-11-17 01:26:11.587246] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:03.306 [2024-11-17 01:26:11.587782] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58173 ] 00:06:03.306 [2024-11-17 01:26:11.760309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.564 [2024-11-17 01:26:11.909031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.564 [2024-11-17 01:26:11.909219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.564 [2024-11-17 01:26:11.909419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.564 [2024-11-17 01:26:11.909727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.132 01:26:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.132 01:26:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:04.132 01:26:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:04.132 01:26:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.132 01:26:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.132 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.132 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.132 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.132 POWER: Cannot set governor of lcore 0 to performance 00:06:04.132 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.132 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.132 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.132 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.132 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:04.132 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:04.132 POWER: Unable to set Power Management Environment for lcore 0 00:06:04.132 [2024-11-17 01:26:12.427600] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:04.132 [2024-11-17 01:26:12.427732] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:04.132 [2024-11-17 01:26:12.427787] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:04.132 [2024-11-17 01:26:12.427890] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:04.132 [2024-11-17 01:26:12.428004] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:04.132 [2024-11-17 01:26:12.428040] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:04.132 01:26:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.132 01:26:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:04.132 01:26:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.132 01:26:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 
00:06:04.391 [2024-11-17 01:26:12.803414] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:04.391 01:26:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.391 01:26:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:04.391 01:26:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.391 01:26:12 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.391 01:26:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.391 ************************************ 00:06:04.391 START TEST scheduler_create_thread 00:06:04.391 ************************************ 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.391 2 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.391 3 00:06:04.391 01:26:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.650 4 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.650 5 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.650 6 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.650 7 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.650 8 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.650 9 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.650 10 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.650 01:26:12 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.650 01:26:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.030 01:26:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.030 01:26:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:06.030 01:26:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:06.030 01:26:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.030 01:26:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.969 01:26:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.969 01:26:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.970 01:26:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.970 01:26:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.557 01:26:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.557 01:26:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.557 01:26:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.557 01:26:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.557 01:26:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.521 01:26:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.521 00:06:08.521 real 0m3.886s 00:06:08.521 user 0m0.030s 00:06:08.521 sys 0m0.008s 00:06:08.521 01:26:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.521 01:26:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.521 ************************************ 00:06:08.521 END TEST scheduler_create_thread 00:06:08.521 ************************************ 00:06:08.521 01:26:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.521 01:26:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58173 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58173 ']' 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58173 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58173 00:06:08.521 killing process with pid 58173 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58173' 00:06:08.521 01:26:16 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58173 00:06:08.521 01:26:16 
event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58173 00:06:08.781 [2024-11-17 01:26:17.082808] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:10.163 ************************************ 00:06:10.163 END TEST event_scheduler 00:06:10.163 ************************************ 00:06:10.163 00:06:10.163 real 0m7.038s 00:06:10.163 user 0m14.334s 00:06:10.163 sys 0m0.611s 00:06:10.163 01:26:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.163 01:26:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.163 01:26:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:10.163 01:26:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:10.163 01:26:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.163 01:26:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.163 01:26:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.163 ************************************ 00:06:10.163 START TEST app_repeat 00:06:10.163 ************************************ 00:06:10.163 01:26:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58301 00:06:10.163 01:26:18 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58301' 00:06:10.163 Process app_repeat pid: 58301 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:10.163 spdk_app_start Round 0 00:06:10.163 01:26:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58301 /var/tmp/spdk-nbd.sock 00:06:10.163 01:26:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:06:10.163 01:26:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.163 01:26:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.164 01:26:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.164 01:26:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.164 01:26:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.164 [2024-11-17 01:26:18.456436] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:10.164 [2024-11-17 01:26:18.456650] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58301 ] 00:06:10.164 [2024-11-17 01:26:18.615967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.423 [2024-11-17 01:26:18.729803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.423 [2024-11-17 01:26:18.729856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.998 01:26:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.999 01:26:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.999 01:26:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.271 Malloc0 00:06:11.271 01:26:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.531 Malloc1 00:06:11.531 01:26:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.531 01:26:19 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.531 01:26:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.791 /dev/nbd0 00:06:11.791 01:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.791 01:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.791 1+0 records in 00:06:11.791 1+0 
records out 00:06:11.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623298 s, 6.6 MB/s 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.791 01:26:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:11.791 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.791 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.791 01:26:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.051 /dev/nbd1 00:06:12.051 01:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.051 01:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.051 1+0 records in 00:06:12.051 1+0 records out 00:06:12.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423038 s, 9.7 MB/s 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.051 01:26:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.051 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.051 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.051 01:26:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.051 01:26:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.051 01:26:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.310 { 00:06:12.310 "nbd_device": "/dev/nbd0", 00:06:12.310 "bdev_name": "Malloc0" 00:06:12.310 }, 00:06:12.310 { 00:06:12.310 "nbd_device": "/dev/nbd1", 00:06:12.310 "bdev_name": "Malloc1" 00:06:12.310 } 00:06:12.310 ]' 00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.310 { 00:06:12.310 "nbd_device": "/dev/nbd0", 00:06:12.310 "bdev_name": "Malloc0" 00:06:12.310 }, 00:06:12.310 { 00:06:12.310 "nbd_device": "/dev/nbd1", 00:06:12.310 "bdev_name": "Malloc1" 00:06:12.310 } 00:06:12.310 ]' 
00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.310 /dev/nbd1' 00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.310 /dev/nbd1' 00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.310 01:26:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.311 01:26:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.311 01:26:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.311 01:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.311 01:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.311 01:26:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.311 01:26:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.311 01:26:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.311 01:26:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.570 256+0 records in 00:06:12.570 256+0 records out 00:06:12.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01229 s, 85.3 MB/s 00:06:12.570 01:26:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.570 01:26:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.571 256+0 records in 00:06:12.571 256+0 records out 00:06:12.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287597 s, 36.5 MB/s 00:06:12.571 01:26:20 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.571 256+0 records in 00:06:12.571 256+0 records out 00:06:12.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313825 s, 33.4 MB/s 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.571 01:26:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.831 01:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.092 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.352 01:26:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.352 01:26:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.612 01:26:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.993 [2024-11-17 01:26:23.026916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.993 [2024-11-17 01:26:23.138364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.993 [2024-11-17 01:26:23.138366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.993 
[2024-11-17 01:26:23.327049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.993 [2024-11-17 01:26:23.327255] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.926 spdk_app_start Round 1 00:06:16.926 01:26:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.926 01:26:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:16.926 01:26:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58301 /var/tmp/spdk-nbd.sock 00:06:16.926 01:26:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:06:16.926 01:26:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.926 01:26:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.926 01:26:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:16.926 01:26:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.926 01:26:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.926 01:26:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.926 01:26:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:16.926 01:26:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.186 Malloc0 00:06:17.186 01:26:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.446 Malloc1 00:06:17.446 01:26:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.446 01:26:25 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.446 /dev/nbd0 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.446 1+0 records in 00:06:17.446 1+0 records out 00:06:17.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384504 s, 10.7 MB/s 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.446 
01:26:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.446 01:26:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.446 01:26:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.707 /dev/nbd1 00:06:17.707 01:26:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.707 01:26:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.707 1+0 records in 00:06:17.707 1+0 records out 00:06:17.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209061 s, 19.6 MB/s 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:17.707 01:26:26 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.707 01:26:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:17.707 01:26:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.707 01:26:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.707 01:26:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.707 01:26:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.707 01:26:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.967 { 00:06:17.967 "nbd_device": "/dev/nbd0", 00:06:17.967 "bdev_name": "Malloc0" 00:06:17.967 }, 00:06:17.967 { 00:06:17.967 "nbd_device": "/dev/nbd1", 00:06:17.967 "bdev_name": "Malloc1" 00:06:17.967 } 00:06:17.967 ]' 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.967 { 00:06:17.967 "nbd_device": "/dev/nbd0", 00:06:17.967 "bdev_name": "Malloc0" 00:06:17.967 }, 00:06:17.967 { 00:06:17.967 "nbd_device": "/dev/nbd1", 00:06:17.967 "bdev_name": "Malloc1" 00:06:17.967 } 00:06:17.967 ]' 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.967 /dev/nbd1' 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.967 /dev/nbd1' 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.967 
01:26:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.967 01:26:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.968 01:26:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.968 01:26:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.968 01:26:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.968 01:26:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.968 01:26:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.228 256+0 records in 00:06:18.228 256+0 records out 00:06:18.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146511 s, 71.6 MB/s 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.228 256+0 records in 00:06:18.228 256+0 records out 00:06:18.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256939 s, 40.8 MB/s 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.228 256+0 records in 00:06:18.228 256+0 records out 00:06:18.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298561 s, 35.1 MB/s 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.228 01:26:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.488 01:26:26 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.488 01:26:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.488 01:26:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.489 01:26:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.489 01:26:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.489 01:26:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.489 01:26:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.489 01:26:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.489 01:26:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.489 01:26:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.748 01:26:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.748 01:26:27 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.748 01:26:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.749 01:26:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.008 01:26:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.008 01:26:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.268 01:26:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.649 [2024-11-17 01:26:28.730485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.649 [2024-11-17 01:26:28.839247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.649 [2024-11-17 01:26:28.839273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.649 [2024-11-17 01:26:29.028493] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.649 [2024-11-17 01:26:29.028554] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.566 spdk_app_start Round 2 00:06:22.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:22.566 01:26:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.566 01:26:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:22.566 01:26:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58301 /var/tmp/spdk-nbd.sock 00:06:22.566 01:26:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:06:22.566 01:26:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.566 01:26:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.566 01:26:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.566 01:26:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.566 01:26:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.566 01:26:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.566 01:26:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:22.566 01:26:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.864 Malloc0 00:06:22.864 01:26:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.124 Malloc1 00:06:23.124 01:26:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.124 /dev/nbd0 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.124 01:26:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.125 1+0 records in 00:06:23.125 1+0 records out 00:06:23.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340936 s, 12.0 MB/s 00:06:23.125 01:26:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.385 /dev/nbd1 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:23.385 01:26:31 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.385 1+0 records in 00:06:23.385 1+0 records out 00:06:23.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225122 s, 18.2 MB/s 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.385 01:26:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.385 01:26:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.646 { 00:06:23.646 "nbd_device": "/dev/nbd0", 00:06:23.646 "bdev_name": "Malloc0" 00:06:23.646 }, 00:06:23.646 { 00:06:23.646 "nbd_device": "/dev/nbd1", 00:06:23.646 "bdev_name": "Malloc1" 00:06:23.646 } 00:06:23.646 ]' 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.646 { 
00:06:23.646 "nbd_device": "/dev/nbd0", 00:06:23.646 "bdev_name": "Malloc0" 00:06:23.646 }, 00:06:23.646 { 00:06:23.646 "nbd_device": "/dev/nbd1", 00:06:23.646 "bdev_name": "Malloc1" 00:06:23.646 } 00:06:23.646 ]' 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.646 /dev/nbd1' 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.646 /dev/nbd1' 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.646 01:26:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.906 256+0 records in 00:06:23.906 256+0 records out 00:06:23.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144775 s, 72.4 MB/s 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.906 01:26:32 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.906 256+0 records in 00:06:23.906 256+0 records out 00:06:23.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246604 s, 42.5 MB/s 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.906 256+0 records in 00:06:23.906 256+0 records out 00:06:23.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255472 s, 41.0 MB/s 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.906 01:26:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.166 01:26:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.426 01:26:32 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.426 01:26:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.686 01:26:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.686 01:26:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.945 01:26:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.326 
[2024-11-17 01:26:34.408157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.326 [2024-11-17 01:26:34.519343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.326 [2024-11-17 01:26:34.519346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.326 [2024-11-17 01:26:34.710274] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.326 [2024-11-17 01:26:34.710356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.235 01:26:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58301 /var/tmp/spdk-nbd.sock 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.236 01:26:36 event.app_repeat -- event/event.sh@39 -- # killprocess 58301 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58301 ']' 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58301 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58301 00:06:28.236 killing process with pid 58301 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58301' 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58301 00:06:28.236 01:26:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58301 00:06:29.175 spdk_app_start is called in Round 0. 00:06:29.175 Shutdown signal received, stop current app iteration 00:06:29.175 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:29.175 spdk_app_start is called in Round 1. 00:06:29.175 Shutdown signal received, stop current app iteration 00:06:29.175 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:29.175 spdk_app_start is called in Round 2. 
00:06:29.175 Shutdown signal received, stop current app iteration 00:06:29.175 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:29.175 spdk_app_start is called in Round 3. 00:06:29.175 Shutdown signal received, stop current app iteration 00:06:29.175 01:26:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.175 01:26:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.175 00:06:29.175 real 0m19.181s 00:06:29.175 user 0m41.145s 00:06:29.175 sys 0m2.644s 00:06:29.175 01:26:37 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.175 ************************************ 00:06:29.175 END TEST app_repeat 00:06:29.175 ************************************ 00:06:29.175 01:26:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.175 01:26:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.175 01:26:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.175 01:26:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.175 01:26:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.175 01:26:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.175 ************************************ 00:06:29.175 START TEST cpu_locks 00:06:29.175 ************************************ 00:06:29.175 01:26:37 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:29.435 * Looking for test storage... 
00:06:29.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.435 01:26:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.435 --rc genhtml_branch_coverage=1 00:06:29.435 --rc genhtml_function_coverage=1 00:06:29.435 --rc genhtml_legend=1 00:06:29.435 --rc geninfo_all_blocks=1 00:06:29.435 --rc geninfo_unexecuted_blocks=1 00:06:29.435 00:06:29.435 ' 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.435 --rc genhtml_branch_coverage=1 00:06:29.435 --rc genhtml_function_coverage=1 00:06:29.435 --rc genhtml_legend=1 00:06:29.435 --rc geninfo_all_blocks=1 00:06:29.435 --rc geninfo_unexecuted_blocks=1 
00:06:29.435 00:06:29.435 ' 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.435 --rc genhtml_branch_coverage=1 00:06:29.435 --rc genhtml_function_coverage=1 00:06:29.435 --rc genhtml_legend=1 00:06:29.435 --rc geninfo_all_blocks=1 00:06:29.435 --rc geninfo_unexecuted_blocks=1 00:06:29.435 00:06:29.435 ' 00:06:29.435 01:26:37 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.435 --rc genhtml_branch_coverage=1 00:06:29.435 --rc genhtml_function_coverage=1 00:06:29.435 --rc genhtml_legend=1 00:06:29.435 --rc geninfo_all_blocks=1 00:06:29.435 --rc geninfo_unexecuted_blocks=1 00:06:29.435 00:06:29.435 ' 00:06:29.435 01:26:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.436 01:26:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.436 01:26:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.436 01:26:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.436 01:26:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.436 01:26:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.436 01:26:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.436 ************************************ 00:06:29.436 START TEST default_locks 00:06:29.436 ************************************ 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58738 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58738 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 58738 ']' 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.436 01:26:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.695 [2024-11-17 01:26:37.958730] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:29.695 [2024-11-17 01:26:37.958927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58738 ] 00:06:29.695 [2024-11-17 01:26:38.131268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.957 [2024-11-17 01:26:38.245531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.915 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.915 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:30.915 01:26:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58738 00:06:30.915 01:26:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.915 01:26:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58738 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58738 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58738 ']' 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58738 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58738 00:06:31.175 killing process with pid 58738 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58738' 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58738 00:06:31.175 01:26:39 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58738 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58738 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58738 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58738 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58738 ']' 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.716 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58738) - No such process 00:06:33.716 ERROR: process (pid: 58738) is no longer running 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:33.716 00:06:33.716 real 0m3.889s 00:06:33.716 user 0m3.811s 00:06:33.716 sys 0m0.593s 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.716 01:26:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.716 ************************************ 00:06:33.716 END TEST default_locks 00:06:33.716 ************************************ 00:06:33.716 01:26:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:33.716 01:26:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:33.716 01:26:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.716 01:26:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.716 ************************************ 00:06:33.716 START TEST default_locks_via_rpc 00:06:33.716 ************************************ 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58813 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58813 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58813 ']' 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.716 01:26:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.716 [2024-11-17 01:26:41.907132] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:33.717 [2024-11-17 01:26:41.907707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58813 ] 00:06:33.717 [2024-11-17 01:26:42.091043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.976 [2024-11-17 01:26:42.198301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.915 01:26:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58813 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58813 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58813 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58813 ']' 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58813 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58813 00:06:34.915 killing process with pid 58813 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58813' 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58813 00:06:34.915 01:26:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58813 00:06:37.455 ************************************ 00:06:37.455 END TEST default_locks_via_rpc 00:06:37.455 ************************************ 00:06:37.455 00:06:37.455 real 0m3.812s 00:06:37.455 user 0m3.693s 00:06:37.455 sys 0m0.601s 00:06:37.455 
01:26:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.455 01:26:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.455 01:26:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:37.455 01:26:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.455 01:26:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.455 01:26:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.455 ************************************ 00:06:37.455 START TEST non_locking_app_on_locked_coremask 00:06:37.455 ************************************ 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58882 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58882 /var/tmp/spdk.sock 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58882 ']' 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.455 01:26:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.455 [2024-11-17 01:26:45.789814] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:37.455 [2024-11-17 01:26:45.789922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58882 ] 00:06:37.715 [2024-11-17 01:26:45.961017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.715 [2024-11-17 01:26:46.069508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.654 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.654 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.654 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58902 00:06:38.654 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:38.654 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58902 /var/tmp/spdk2.sock 00:06:38.654 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58902 ']' 00:06:38.654 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk2.sock... 00:06:38.655 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.655 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.655 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.655 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.655 01:26:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.655 [2024-11-17 01:26:47.013261] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:38.655 [2024-11-17 01:26:47.013372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58902 ] 00:06:38.914 [2024-11-17 01:26:47.181979] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:38.914 [2024-11-17 01:26:47.182046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.173 [2024-11-17 01:26:47.404255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58882 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58882 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58882 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58882 ']' 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58882 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58882 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.716 killing process with pid 58882 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58882' 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58882 00:06:41.716 01:26:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58882 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58902 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58902 ']' 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58902 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58902 00:06:45.972 killing process with pid 58902 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58902' 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58902 00:06:45.972 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58902 00:06:48.513 00:06:48.513 real 0m10.973s 00:06:48.513 user 0m11.199s 00:06:48.513 sys 0m1.128s 00:06:48.513 01:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:48.513 ************************************ 00:06:48.513 END TEST non_locking_app_on_locked_coremask 00:06:48.513 ************************************ 00:06:48.513 01:26:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.513 01:26:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:48.513 01:26:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.513 01:26:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.513 01:26:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.513 ************************************ 00:06:48.513 START TEST locking_app_on_unlocked_coremask 00:06:48.513 ************************************ 00:06:48.513 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:48.513 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.513 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59048 00:06:48.513 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59048 /var/tmp/spdk.sock 00:06:48.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.513 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59048 ']' 00:06:48.514 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.514 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.514 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.514 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.514 01:26:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.514 [2024-11-17 01:26:56.820691] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:48.514 [2024-11-17 01:26:56.820842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59048 ] 00:06:48.773 [2024-11-17 01:26:56.994875] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.773 [2024-11-17 01:26:56.994924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.773 [2024-11-17 01:26:57.100966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:49.712 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.712 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:49.712 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59064 00:06:49.712 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59064 /var/tmp/spdk2.sock 00:06:49.712 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59064 ']' 00:06:49.712 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.712 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.712 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.713 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.713 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.713 01:26:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.713 [2024-11-17 01:26:58.045511] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:49.713 [2024-11-17 01:26:58.045656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59064 ] 00:06:49.972 [2024-11-17 01:26:58.212749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.230 [2024-11-17 01:26:58.450224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.168 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.169 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.169 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59064 00:06:52.169 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59064 00:06:52.169 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59048 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59048 ']' 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59048 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59048 00:06:53.109 killing process with pid 59048 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59048' 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59048 00:06:53.109 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59048 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59064 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59064 ']' 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59064 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59064 00:06:58.404 killing process with pid 59064 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59064' 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59064 00:06:58.404 01:27:06 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59064 00:07:00.309 ************************************ 00:07:00.309 END TEST locking_app_on_unlocked_coremask 00:07:00.309 ************************************ 00:07:00.309 00:07:00.309 real 0m11.622s 00:07:00.309 user 0m11.828s 00:07:00.309 sys 0m1.404s 00:07:00.309 01:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.309 01:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.309 01:27:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:00.310 01:27:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.310 01:27:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.310 01:27:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.310 ************************************ 00:07:00.310 START TEST locking_app_on_locked_coremask 00:07:00.310 ************************************ 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59214 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59214 /var/tmp/spdk.sock 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59214 ']' 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.310 01:27:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.310 [2024-11-17 01:27:08.512267] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:00.310 [2024-11-17 01:27:08.512957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:07:00.310 [2024-11-17 01:27:08.687139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.570 [2024-11-17 01:27:08.805094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59232 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59232 /var/tmp/spdk2.sock 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59232 /var/tmp/spdk2.sock 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59232 /var/tmp/spdk2.sock 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59232 ']' 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.505 01:27:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.505 [2024-11-17 01:27:09.764853] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:01.505 [2024-11-17 01:27:09.765099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59232 ] 00:07:01.506 [2024-11-17 01:27:09.933907] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59214 has claimed it. 00:07:01.506 [2024-11-17 01:27:09.933971] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.073 ERROR: process (pid: 59232) is no longer running 00:07:02.073 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59232) - No such process 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59214 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59214 00:07:02.073 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59214 00:07:02.332 01:27:10 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59214 ']' 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59214 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59214 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.332 killing process with pid 59214 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59214' 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59214 00:07:02.332 01:27:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59214 00:07:04.916 00:07:04.916 real 0m4.790s 00:07:04.916 user 0m4.939s 00:07:04.916 sys 0m0.703s 00:07:04.916 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.916 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.916 ************************************ 00:07:04.916 END TEST locking_app_on_locked_coremask 00:07:04.916 ************************************ 00:07:04.916 01:27:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.916 01:27:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:07:04.916 01:27:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.916 01:27:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.916 ************************************ 00:07:04.916 START TEST locking_overlapped_coremask 00:07:04.916 ************************************ 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59302 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59302 /var/tmp/spdk.sock 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59302 ']' 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.916 01:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.176 [2024-11-17 01:27:13.386153] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:05.176 [2024-11-17 01:27:13.386281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:07:05.176 [2024-11-17 01:27:13.568376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.435 [2024-11-17 01:27:13.710094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.435 [2024-11-17 01:27:13.710253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.435 [2024-11-17 01:27:13.710308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59325 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59325 /var/tmp/spdk2.sock 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59325 /var/tmp/spdk2.sock 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59325 /var/tmp/spdk2.sock 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59325 ']' 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.371 01:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.371 [2024-11-17 01:27:14.780160] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:06.371 [2024-11-17 01:27:14.780359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59325 ] 00:07:06.630 [2024-11-17 01:27:14.946737] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59302 has claimed it. 00:07:06.630 [2024-11-17 01:27:14.950792] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:07.197 ERROR: process (pid: 59325) is no longer running 00:07:07.197 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59325) - No such process 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59302 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59302 ']' 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59302 00:07:07.197 01:27:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59302 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59302' 00:07:07.197 killing process with pid 59302 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59302 00:07:07.197 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59302 00:07:09.733 00:07:09.733 real 0m4.761s 00:07:09.733 user 0m12.782s 00:07:09.733 sys 0m0.719s 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.733 ************************************ 00:07:09.733 END TEST locking_overlapped_coremask 00:07:09.733 ************************************ 00:07:09.733 01:27:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.733 01:27:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.733 01:27:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.733 01:27:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.733 ************************************ 00:07:09.733 START TEST 
locking_overlapped_coremask_via_rpc 00:07:09.733 ************************************ 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59395 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59395 /var/tmp/spdk.sock 00:07:09.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59395 ']' 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.733 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.993 [2024-11-17 01:27:18.208733] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:09.993 [2024-11-17 01:27:18.208951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59395 ] 00:07:09.993 [2024-11-17 01:27:18.383911] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.993 [2024-11-17 01:27:18.384083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.252 [2024-11-17 01:27:18.530464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.252 [2024-11-17 01:27:18.530620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.252 [2024-11-17 01:27:18.530663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59413 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59413 /var/tmp/spdk2.sock 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59413 ']' 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.188 01:27:19 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.188 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.188 [2024-11-17 01:27:19.627363] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:11.188 [2024-11-17 01:27:19.627576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59413 ] 00:07:11.447 [2024-11-17 01:27:19.801172] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.447 [2024-11-17 01:27:19.801364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.705 [2024-11-17 01:27:20.040694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.705 [2024-11-17 01:27:20.044877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.705 [2024-11-17 01:27:20.044896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.291 01:27:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.291 [2024-11-17 01:27:22.223007] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59395 has claimed it. 00:07:14.291 request: 00:07:14.291 { 00:07:14.291 "method": "framework_enable_cpumask_locks", 00:07:14.291 "req_id": 1 00:07:14.291 } 00:07:14.291 Got JSON-RPC error response 00:07:14.291 response: 00:07:14.291 { 00:07:14.291 "code": -32603, 00:07:14.291 "message": "Failed to claim CPU core: 2" 00:07:14.291 } 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59395 /var/tmp/spdk.sock 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59395 ']' 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59413 /var/tmp/spdk2.sock 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59413 ']' 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.291 00:07:14.291 real 0m4.577s 00:07:14.291 user 0m1.322s 00:07:14.291 sys 0m0.203s 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.291 ************************************ 00:07:14.291 END TEST locking_overlapped_coremask_via_rpc 00:07:14.291 ************************************ 00:07:14.291 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.291 01:27:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:14.291 01:27:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59395 ]] 00:07:14.291 01:27:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59395 00:07:14.291 01:27:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59395 ']' 00:07:14.291 01:27:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59395 00:07:14.291 01:27:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:14.291 01:27:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.551 01:27:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59395 00:07:14.551 killing process with pid 59395 00:07:14.551 01:27:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.551 01:27:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.551 01:27:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59395' 00:07:14.551 01:27:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59395 00:07:14.551 01:27:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59395 00:07:17.085 01:27:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59413 ]] 00:07:17.085 01:27:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59413 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59413 ']' 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59413 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59413 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:17.085 killing process with pid 59413 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59413' 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59413 00:07:17.085 01:27:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59413 00:07:19.616 01:27:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:19.616 01:27:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:19.616 01:27:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59395 ]] 00:07:19.616 01:27:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59395 00:07:19.616 01:27:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59395 ']' 00:07:19.616 01:27:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59395 00:07:19.616 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59395) - No such process 00:07:19.616 Process with pid 59395 is not found 00:07:19.616 01:27:27 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59395 is not found' 00:07:19.616 01:27:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59413 ]] 00:07:19.616 01:27:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59413 00:07:19.616 01:27:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59413 ']' 00:07:19.616 01:27:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59413 00:07:19.616 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59413) - No such process 00:07:19.616 Process with pid 59413 is not found 00:07:19.616 01:27:27 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59413 is not found' 00:07:19.616 01:27:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:19.616 00:07:19.616 real 0m50.346s 00:07:19.616 user 1m27.419s 00:07:19.616 sys 0m6.727s 00:07:19.616 01:27:27 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.616 01:27:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.616 
************************************ 00:07:19.616 END TEST cpu_locks 00:07:19.616 ************************************ 00:07:19.616 00:07:19.616 real 1m21.849s 00:07:19.616 user 2m30.143s 00:07:19.616 sys 0m10.700s 00:07:19.616 01:27:28 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.616 01:27:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.616 ************************************ 00:07:19.616 END TEST event 00:07:19.616 ************************************ 00:07:19.877 01:27:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:19.877 01:27:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.877 01:27:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.877 01:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:19.877 ************************************ 00:07:19.877 START TEST thread 00:07:19.877 ************************************ 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:19.877 * Looking for test storage... 
00:07:19.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:19.877 01:27:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.877 01:27:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.877 01:27:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.877 01:27:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.877 01:27:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.877 01:27:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.877 01:27:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.877 01:27:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.877 01:27:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.877 01:27:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.877 01:27:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.877 01:27:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:19.877 01:27:28 thread -- scripts/common.sh@345 -- # : 1 00:07:19.877 01:27:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.877 01:27:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.877 01:27:28 thread -- scripts/common.sh@365 -- # decimal 1 00:07:19.877 01:27:28 thread -- scripts/common.sh@353 -- # local d=1 00:07:19.877 01:27:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.877 01:27:28 thread -- scripts/common.sh@355 -- # echo 1 00:07:19.877 01:27:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.877 01:27:28 thread -- scripts/common.sh@366 -- # decimal 2 00:07:19.877 01:27:28 thread -- scripts/common.sh@353 -- # local d=2 00:07:19.877 01:27:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.877 01:27:28 thread -- scripts/common.sh@355 -- # echo 2 00:07:19.877 01:27:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.877 01:27:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.877 01:27:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.877 01:27:28 thread -- scripts/common.sh@368 -- # return 0 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:19.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.877 --rc genhtml_branch_coverage=1 00:07:19.877 --rc genhtml_function_coverage=1 00:07:19.877 --rc genhtml_legend=1 00:07:19.877 --rc geninfo_all_blocks=1 00:07:19.877 --rc geninfo_unexecuted_blocks=1 00:07:19.877 00:07:19.877 ' 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:19.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.877 --rc genhtml_branch_coverage=1 00:07:19.877 --rc genhtml_function_coverage=1 00:07:19.877 --rc genhtml_legend=1 00:07:19.877 --rc geninfo_all_blocks=1 00:07:19.877 --rc geninfo_unexecuted_blocks=1 00:07:19.877 00:07:19.877 ' 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:19.877 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.877 --rc genhtml_branch_coverage=1 00:07:19.877 --rc genhtml_function_coverage=1 00:07:19.877 --rc genhtml_legend=1 00:07:19.877 --rc geninfo_all_blocks=1 00:07:19.877 --rc geninfo_unexecuted_blocks=1 00:07:19.877 00:07:19.877 ' 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:19.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.877 --rc genhtml_branch_coverage=1 00:07:19.877 --rc genhtml_function_coverage=1 00:07:19.877 --rc genhtml_legend=1 00:07:19.877 --rc geninfo_all_blocks=1 00:07:19.877 --rc geninfo_unexecuted_blocks=1 00:07:19.877 00:07:19.877 ' 00:07:19.877 01:27:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:19.877 01:27:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.878 01:27:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.878 ************************************ 00:07:19.878 START TEST thread_poller_perf 00:07:19.878 ************************************ 00:07:19.878 01:27:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.137 [2024-11-17 01:27:28.381063] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:20.137 [2024-11-17 01:27:28.381163] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59619 ] 00:07:20.137 [2024-11-17 01:27:28.554535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.396 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:20.396 [2024-11-17 01:27:28.687230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.781 [2024-11-17T01:27:30.241Z] ====================================== 00:07:21.781 [2024-11-17T01:27:30.241Z] busy:2300829784 (cyc) 00:07:21.781 [2024-11-17T01:27:30.241Z] total_run_count: 409000 00:07:21.781 [2024-11-17T01:27:30.241Z] tsc_hz: 2290000000 (cyc) 00:07:21.781 [2024-11-17T01:27:30.241Z] ====================================== 00:07:21.781 [2024-11-17T01:27:30.241Z] poller_cost: 5625 (cyc), 2456 (nsec) 00:07:21.781 00:07:21.781 real 0m1.600s 00:07:21.781 user 0m1.390s 00:07:21.781 sys 0m0.103s 00:07:21.781 01:27:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.781 01:27:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.781 ************************************ 00:07:21.781 END TEST thread_poller_perf 00:07:21.781 ************************************ 00:07:21.781 01:27:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.781 01:27:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:21.782 01:27:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.782 01:27:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.782 ************************************ 00:07:21.782 START TEST thread_poller_perf 00:07:21.782 
************************************ 00:07:21.782 01:27:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.782 [2024-11-17 01:27:30.055079] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:21.782 [2024-11-17 01:27:30.055206] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59650 ] 00:07:21.782 [2024-11-17 01:27:30.231880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.039 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:22.039 [2024-11-17 01:27:30.375258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.413 [2024-11-17T01:27:31.873Z] ====================================== 00:07:23.413 [2024-11-17T01:27:31.873Z] busy:2294437136 (cyc) 00:07:23.413 [2024-11-17T01:27:31.873Z] total_run_count: 5199000 00:07:23.413 [2024-11-17T01:27:31.873Z] tsc_hz: 2290000000 (cyc) 00:07:23.413 [2024-11-17T01:27:31.873Z] ====================================== 00:07:23.413 [2024-11-17T01:27:31.873Z] poller_cost: 441 (cyc), 192 (nsec) 00:07:23.413 00:07:23.413 real 0m1.585s 00:07:23.413 user 0m1.365s 00:07:23.413 sys 0m0.112s 00:07:23.413 01:27:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.413 01:27:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.413 ************************************ 00:07:23.413 END TEST thread_poller_perf 00:07:23.413 ************************************ 00:07:23.413 01:27:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.413 ************************************ 00:07:23.413 END TEST thread 00:07:23.413 ************************************ 00:07:23.413 
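The two poller_perf runs above report poller_cost as busy cycles divided by total_run_count, then converted to nanoseconds via the TSC frequency. A minimal sketch of that arithmetic, using the exact figures from the log; integer division reproduces the printed values (whether the tool truncates or rounds internally is an assumption):

```shell
# poller_cost (cyc)  = busy_cycles / total_run_count
# poller_cost (nsec) = cyc * 1e9 / tsc_hz
poller_cost_cyc()  { echo $(( $1 / $2 )); }
poller_cost_nsec() { echo $(( $1 * 1000000000 / $2 )); }

# -l 1 run: 2300829784 busy cycles over 409000 runs, tsc_hz 2290000000
poller_cost_cyc  2300829784 409000        # -> 5625
poller_cost_nsec 5625       2290000000    # -> 2456

# -l 0 run: 2294437136 busy cycles over 5199000 runs
poller_cost_cyc  2294437136 5199000       # -> 441
poller_cost_nsec 441        2290000000    # -> 192
```

The roughly tenfold higher total_run_count in the -l 0 run is what drives the per-poll cost from ~2.4 µs down to ~192 ns: with a 0 µs period the poller busy-polls instead of waiting out a timer between calls.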
00:07:23.413 real 0m3.545s 00:07:23.413 user 0m2.922s 00:07:23.413 sys 0m0.425s 00:07:23.413 01:27:31 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.413 01:27:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.413 01:27:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:23.413 01:27:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.413 01:27:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.413 01:27:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.413 01:27:31 -- common/autotest_common.sh@10 -- # set +x 00:07:23.413 ************************************ 00:07:23.413 START TEST app_cmdline 00:07:23.413 ************************************ 00:07:23.413 01:27:31 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.413 * Looking for test storage... 00:07:23.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.413 01:27:31 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.413 01:27:31 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.413 01:27:31 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.672 01:27:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.672 --rc genhtml_branch_coverage=1 00:07:23.672 --rc genhtml_function_coverage=1 00:07:23.672 --rc 
genhtml_legend=1 00:07:23.672 --rc geninfo_all_blocks=1 00:07:23.672 --rc geninfo_unexecuted_blocks=1 00:07:23.672 00:07:23.672 ' 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.672 --rc genhtml_branch_coverage=1 00:07:23.672 --rc genhtml_function_coverage=1 00:07:23.672 --rc genhtml_legend=1 00:07:23.672 --rc geninfo_all_blocks=1 00:07:23.672 --rc geninfo_unexecuted_blocks=1 00:07:23.672 00:07:23.672 ' 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.672 --rc genhtml_branch_coverage=1 00:07:23.672 --rc genhtml_function_coverage=1 00:07:23.672 --rc genhtml_legend=1 00:07:23.672 --rc geninfo_all_blocks=1 00:07:23.672 --rc geninfo_unexecuted_blocks=1 00:07:23.672 00:07:23.672 ' 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.672 --rc genhtml_branch_coverage=1 00:07:23.672 --rc genhtml_function_coverage=1 00:07:23.672 --rc genhtml_legend=1 00:07:23.672 --rc geninfo_all_blocks=1 00:07:23.672 --rc geninfo_unexecuted_blocks=1 00:07:23.672 00:07:23.672 ' 00:07:23.672 01:27:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.672 01:27:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59739 00:07:23.672 01:27:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.672 01:27:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59739 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59739 ']' 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:23.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.672 01:27:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.672 [2024-11-17 01:27:32.024146] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:23.672 [2024-11-17 01:27:32.024255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:07:23.933 [2024-11-17 01:27:32.196274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.933 [2024-11-17 01:27:32.313065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.871 01:27:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.871 01:27:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:24.871 01:27:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:25.129 { 00:07:25.129 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:25.129 "fields": { 00:07:25.129 "major": 25, 00:07:25.129 "minor": 1, 00:07:25.129 "patch": 0, 00:07:25.130 "suffix": "-pre", 00:07:25.130 "commit": "83e8405e4" 00:07:25.130 } 00:07:25.130 } 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:25.130 01:27:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:25.130 01:27:33 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.388 request: 00:07:25.388 { 00:07:25.388 "method": "env_dpdk_get_mem_stats", 00:07:25.388 "req_id": 1 00:07:25.388 } 00:07:25.388 Got JSON-RPC error response 00:07:25.388 response: 00:07:25.388 { 00:07:25.388 "code": -32601, 00:07:25.388 "message": "Method not found" 00:07:25.388 } 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.388 01:27:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59739 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59739 ']' 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59739 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59739 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.388 killing process with pid 59739 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59739' 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 59739 00:07:25.388 01:27:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 59739 00:07:27.942 00:07:27.942 real 0m4.159s 00:07:27.942 user 0m4.390s 00:07:27.942 sys 0m0.571s 00:07:27.942 01:27:35 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.942 ************************************ 00:07:27.942 END TEST app_cmdline 00:07:27.942 ************************************ 00:07:27.942 01:27:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.942 01:27:35 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.942 01:27:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.942 01:27:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.942 01:27:35 -- common/autotest_common.sh@10 -- # set +x 00:07:27.942 ************************************ 00:07:27.942 START TEST version 00:07:27.942 ************************************ 00:07:27.942 01:27:35 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.942 * Looking for test storage... 00:07:27.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.942 01:27:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.942 01:27:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.942 01:27:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.942 01:27:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.942 01:27:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.942 01:27:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.942 01:27:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.942 01:27:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.942 01:27:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.942 01:27:36 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:27.942 01:27:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.942 01:27:36 version -- scripts/common.sh@344 -- # case "$op" in 00:07:27.942 01:27:36 version -- scripts/common.sh@345 -- # : 1 00:07:27.942 01:27:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.942 01:27:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.942 01:27:36 version -- scripts/common.sh@365 -- # decimal 1 00:07:27.942 01:27:36 version -- scripts/common.sh@353 -- # local d=1 00:07:27.942 01:27:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.942 01:27:36 version -- scripts/common.sh@355 -- # echo 1 00:07:27.942 01:27:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.942 01:27:36 version -- scripts/common.sh@366 -- # decimal 2 00:07:27.942 01:27:36 version -- scripts/common.sh@353 -- # local d=2 00:07:27.942 01:27:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.942 01:27:36 version -- scripts/common.sh@355 -- # echo 2 00:07:27.942 01:27:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.942 01:27:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.942 01:27:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.942 01:27:36 version -- scripts/common.sh@368 -- # return 0 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.942 --rc genhtml_branch_coverage=1 00:07:27.942 --rc genhtml_function_coverage=1 00:07:27.942 --rc genhtml_legend=1 00:07:27.942 --rc geninfo_all_blocks=1 00:07:27.942 --rc geninfo_unexecuted_blocks=1 00:07:27.942 00:07:27.942 ' 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.942 --rc genhtml_branch_coverage=1 00:07:27.942 --rc genhtml_function_coverage=1 00:07:27.942 --rc genhtml_legend=1 00:07:27.942 --rc geninfo_all_blocks=1 00:07:27.942 --rc geninfo_unexecuted_blocks=1 00:07:27.942 00:07:27.942 ' 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.942 --rc genhtml_branch_coverage=1 00:07:27.942 --rc genhtml_function_coverage=1 00:07:27.942 --rc genhtml_legend=1 00:07:27.942 --rc geninfo_all_blocks=1 00:07:27.942 --rc geninfo_unexecuted_blocks=1 00:07:27.942 00:07:27.942 ' 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.942 --rc genhtml_branch_coverage=1 00:07:27.942 --rc genhtml_function_coverage=1 00:07:27.942 --rc genhtml_legend=1 00:07:27.942 --rc geninfo_all_blocks=1 00:07:27.942 --rc geninfo_unexecuted_blocks=1 00:07:27.942 00:07:27.942 ' 00:07:27.942 01:27:36 version -- app/version.sh@17 -- # get_header_version major 00:07:27.942 01:27:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.942 01:27:36 version -- app/version.sh@14 -- # cut -f2 00:07:27.942 01:27:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.942 01:27:36 version -- app/version.sh@17 -- # major=25 00:07:27.942 01:27:36 version -- app/version.sh@18 -- # get_header_version minor 00:07:27.942 01:27:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.942 01:27:36 version -- app/version.sh@14 -- # cut -f2 00:07:27.942 01:27:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.942 01:27:36 version -- app/version.sh@18 -- # minor=1 00:07:27.942 01:27:36 
version -- app/version.sh@19 -- # get_header_version patch 00:07:27.942 01:27:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.942 01:27:36 version -- app/version.sh@14 -- # cut -f2 00:07:27.942 01:27:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.942 01:27:36 version -- app/version.sh@19 -- # patch=0 00:07:27.942 01:27:36 version -- app/version.sh@20 -- # get_header_version suffix 00:07:27.942 01:27:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.942 01:27:36 version -- app/version.sh@14 -- # cut -f2 00:07:27.942 01:27:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.942 01:27:36 version -- app/version.sh@20 -- # suffix=-pre 00:07:27.942 01:27:36 version -- app/version.sh@22 -- # version=25.1 00:07:27.942 01:27:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:27.942 01:27:36 version -- app/version.sh@28 -- # version=25.1rc0 00:07:27.942 01:27:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:27.942 01:27:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:27.942 01:27:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:27.942 01:27:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:27.942 00:07:27.942 real 0m0.316s 00:07:27.942 user 0m0.191s 00:07:27.942 sys 0m0.181s 00:07:27.942 01:27:36 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.942 01:27:36 version -- common/autotest_common.sh@10 -- # set +x 00:07:27.942 ************************************ 00:07:27.942 END TEST version 00:07:27.942 ************************************ 00:07:27.942 
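The version test above greps SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h and assembles 25.1rc0. A condensed sketch of that assembly, with the "-pre maps to rc0" step inferred from the trace (version=25.1 becomes version=25.1rc0 at app/version.sh@28; the real script holds the exact mapping):

```shell
# Component values pulled from version.h in the trace above.
major=25 minor=1 patch=0 suffix=-pre

version=$major.$minor
[ "$patch" -ne 0 ] && version=$version.$patch     # skipped here: patch is 0
[ "$suffix" = "-pre" ] && version=${version}rc0   # -pre -> rc0, inferred from the trace

echo "$version"   # -> 25.1rc0
```

The final comparison in the test (`[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]`) then checks this against the version reported by the spdk Python package, so a header/package mismatch fails the suite.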
01:27:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:27.942 01:27:36 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:27.942 01:27:36 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:27.942 01:27:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.942 01:27:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.942 01:27:36 -- common/autotest_common.sh@10 -- # set +x 00:07:27.942 ************************************ 00:07:27.942 START TEST bdev_raid 00:07:27.942 ************************************ 00:07:27.942 01:27:36 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:28.202 * Looking for test storage... 00:07:28.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.202 01:27:36 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.202 --rc genhtml_branch_coverage=1 00:07:28.202 --rc genhtml_function_coverage=1 00:07:28.202 --rc genhtml_legend=1 00:07:28.202 --rc geninfo_all_blocks=1 00:07:28.202 --rc geninfo_unexecuted_blocks=1 00:07:28.202 00:07:28.202 ' 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.202 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:28.202 --rc genhtml_branch_coverage=1 00:07:28.202 --rc genhtml_function_coverage=1 00:07:28.202 --rc genhtml_legend=1 00:07:28.202 --rc geninfo_all_blocks=1 00:07:28.202 --rc geninfo_unexecuted_blocks=1 00:07:28.202 00:07:28.202 ' 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.202 --rc genhtml_branch_coverage=1 00:07:28.202 --rc genhtml_function_coverage=1 00:07:28.202 --rc genhtml_legend=1 00:07:28.202 --rc geninfo_all_blocks=1 00:07:28.202 --rc geninfo_unexecuted_blocks=1 00:07:28.202 00:07:28.202 ' 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.202 --rc genhtml_branch_coverage=1 00:07:28.202 --rc genhtml_function_coverage=1 00:07:28.202 --rc genhtml_legend=1 00:07:28.202 --rc geninfo_all_blocks=1 00:07:28.202 --rc geninfo_unexecuted_blocks=1 00:07:28.202 00:07:28.202 ' 00:07:28.202 01:27:36 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:28.202 01:27:36 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:28.202 01:27:36 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:28.202 01:27:36 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:28.202 01:27:36 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:28.202 01:27:36 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:28.202 01:27:36 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.202 01:27:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.202 ************************************ 
00:07:28.202 START TEST raid1_resize_data_offset_test 00:07:28.202 ************************************ 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59927 00:07:28.202 Process raid pid: 59927 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59927' 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59927 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59927 ']' 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.202 01:27:36 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.202 [2024-11-17 01:27:36.626245] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:28.202 [2024-11-17 01:27:36.626351] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.461 [2024-11-17 01:27:36.798183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.461 [2024-11-17 01:27:36.907834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.721 [2024-11-17 01:27:37.094023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.721 [2024-11-17 01:27:37.094061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.289 malloc0 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.289 malloc1 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.289 01:27:37 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.289 null0 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.289 01:27:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.290 [2024-11-17 01:27:37.616542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:29.290 [2024-11-17 01:27:37.618303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:29.290 [2024-11-17 01:27:37.618351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:29.290 [2024-11-17 01:27:37.618485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:29.290 [2024-11-17 01:27:37.618500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:29.290 [2024-11-17 01:27:37.618752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:29.290 [2024-11-17 01:27:37.618939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:29.290 [2024-11-17 01:27:37.618972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:29.290 [2024-11-17 01:27:37.619130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
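The check that follows pulls `.[].base_bdevs_list[2].data_offset` out of the bdev_raid_get_bdevs output with jq and compares it to 2048. A standalone sketch against a hand-written sample document; the JSON here is hypothetical, trimmed to just the fields that jq path touches (real output carries many more):

```shell
# Hypothetical, trimmed bdev_raid_get_bdevs-style output: one raid bdev
# whose three base bdevs each report their data_offset in blocks.
bdevs='[{"name":"Raid","base_bdevs_list":[
  {"name":"malloc0","data_offset":2048},
  {"name":"malloc1","data_offset":2048},
  {"name":"null0","data_offset":2048}]}]'

# Same extraction the test performs: the third base bdev's data_offset.
offset=$(echo "$bdevs" | jq -r '.[].base_bdevs_list[2].data_offset')
test "$offset" -eq 2048 && echo "data_offset: $offset blocks"
```

At the 512-byte blocklen used above, 2048 blocks is 1 MiB reserved at the head of each base bdev, consistent with the array having been created with the -s (superblock) flag.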
00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.290 [2024-11-17 01:27:37.676429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.290 01:27:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.856 malloc2 00:07:29.856 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.856 01:27:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:29.856 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.856 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.856 [2024-11-17 01:27:38.213051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:29.856 [2024-11-17 01:27:38.229920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:29.856 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.857 [2024-11-17 01:27:38.231615] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59927 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59927 ']' 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59927 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:29.857 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59927 00:07:30.116 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.116 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.116 killing process with pid 59927 00:07:30.116 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59927' 00:07:30.116 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59927 00:07:30.116 01:27:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59927 00:07:30.116 [2024-11-17 01:27:38.317624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.116 [2024-11-17 01:27:38.318617] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:30.116 [2024-11-17 01:27:38.318683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.116 [2024-11-17 01:27:38.318703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:30.116 [2024-11-17 01:27:38.353466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.116 [2024-11-17 01:27:38.353769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.116 [2024-11-17 01:27:38.353792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:32.020 [2024-11-17 01:27:40.084616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.957 01:27:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:32.957 00:07:32.957 real 0m4.592s 00:07:32.957 user 0m4.501s 00:07:32.957 sys 0m0.507s 00:07:32.957 01:27:41 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.957 01:27:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.957 ************************************ 00:07:32.957 END TEST raid1_resize_data_offset_test 00:07:32.957 ************************************ 00:07:32.957 01:27:41 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:32.957 01:27:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.957 01:27:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.957 01:27:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.957 ************************************ 00:07:32.957 START TEST raid0_resize_superblock_test 00:07:32.957 ************************************ 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60010 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:32.957 Process raid pid: 60010 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60010' 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60010 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60010 ']' 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.957 01:27:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.957 [2024-11-17 01:27:41.315685] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:32.957 [2024-11-17 01:27:41.315855] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.216 [2024-11-17 01:27:41.494956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.216 [2024-11-17 01:27:41.609843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.476 [2024-11-17 01:27:41.800582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.476 [2024-11-17 01:27:41.800621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.736 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.736 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.736 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:33.736 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.736 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:34.303 malloc0 00:07:34.303 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.303 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:34.303 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.303 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.303 [2024-11-17 01:27:42.679551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:34.304 [2024-11-17 01:27:42.679614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.304 [2024-11-17 01:27:42.679638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:34.304 [2024-11-17 01:27:42.679649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.304 [2024-11-17 01:27:42.681653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.304 [2024-11-17 01:27:42.681692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:34.304 pt0 00:07:34.304 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.304 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:34.304 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.304 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 e51faa84-1c92-41ea-81a5-cec7dcd2313d 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 5218db5c-547b-43df-b68d-3a8559465a0a 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 8abd75f0-1e21-4416-9f0e-fd36f58626e6 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 [2024-11-17 01:27:42.812255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5218db5c-547b-43df-b68d-3a8559465a0a is claimed 00:07:34.563 [2024-11-17 01:27:42.812343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8abd75f0-1e21-4416-9f0e-fd36f58626e6 is claimed 00:07:34.563 [2024-11-17 01:27:42.812468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:34.563 [2024-11-17 01:27:42.812483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:34.563 [2024-11-17 01:27:42.812720] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:34.563 [2024-11-17 01:27:42.812917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:34.563 [2024-11-17 01:27:42.812935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:34.563 [2024-11-17 01:27:42.813081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:34.563 01:27:42 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:34.563 [2024-11-17 01:27:42.920232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 [2024-11-17 01:27:42.968118] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:34.563 [2024-11-17 01:27:42.968149] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5218db5c-547b-43df-b68d-3a8559465a0a' was resized: old size 131072, new size 204800 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 [2024-11-17 01:27:42.980043] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:34.563 [2024-11-17 01:27:42.980082] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8abd75f0-1e21-4416-9f0e-fd36f58626e6' was resized: old size 131072, new size 204800 00:07:34.563 [2024-11-17 01:27:42.980108] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.563 01:27:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.563 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.830 01:27:43 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.830 [2024-11-17 01:27:43.096005] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.830 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.830 [2024-11-17 01:27:43.139670] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:34.830 [2024-11-17 01:27:43.139738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:34.830 [2024-11-17 01:27:43.139750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.831 [2024-11-17 01:27:43.139784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:34.831 [2024-11-17 01:27:43.139888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.831 [2024-11-17 01:27:43.139919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.831 [2024-11-17 01:27:43.139930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.831 [2024-11-17 01:27:43.147580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:34.831 [2024-11-17 01:27:43.147632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.831 [2024-11-17 01:27:43.147651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:34.831 [2024-11-17 01:27:43.147662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.831 [2024-11-17 01:27:43.149700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.831 [2024-11-17 01:27:43.149738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:34.831 [2024-11-17 01:27:43.151349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5218db5c-547b-43df-b68d-3a8559465a0a 00:07:34.831 [2024-11-17 01:27:43.151420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5218db5c-547b-43df-b68d-3a8559465a0a is claimed 00:07:34.831 [2024-11-17 01:27:43.151548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8abd75f0-1e21-4416-9f0e-fd36f58626e6 00:07:34.831 [2024-11-17 01:27:43.151575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8abd75f0-1e21-4416-9f0e-fd36f58626e6 is claimed 00:07:34.831 [2024-11-17 01:27:43.151690] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 8abd75f0-1e21-4416-9f0e-fd36f58626e6 (2) smaller than existing raid bdev Raid (3) 00:07:34.831 [2024-11-17 01:27:43.151717] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5218db5c-547b-43df-b68d-3a8559465a0a: File exists 00:07:34.831 [2024-11-17 01:27:43.151750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:34.831 [2024-11-17 01:27:43.151779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:34.831 [2024-11-17 01:27:43.152010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:34.831 [2024-11-17 01:27:43.152171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:34.831 [2024-11-17 01:27:43.152184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:34.831 [2024-11-17 01:27:43.152369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.831 pt0 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.831 [2024-11-17 01:27:43.167954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60010 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60010 ']' 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60010 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60010 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.831 killing process with pid 60010 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60010' 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60010 00:07:34.831 [2024-11-17 01:27:43.243688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.831 [2024-11-17 01:27:43.243740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.831 [2024-11-17 01:27:43.243801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.831 [2024-11-17 01:27:43.243810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:34.831 01:27:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60010 00:07:36.223 [2024-11-17 01:27:44.603724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.605 01:27:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:37.605 00:07:37.605 real 0m4.462s 00:07:37.605 user 0m4.721s 00:07:37.605 sys 0m0.551s 00:07:37.605 01:27:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.605 01:27:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.605 
************************************ 00:07:37.605 END TEST raid0_resize_superblock_test 00:07:37.605 ************************************ 00:07:37.605 01:27:45 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:37.605 01:27:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.605 01:27:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.605 01:27:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.605 ************************************ 00:07:37.605 START TEST raid1_resize_superblock_test 00:07:37.605 ************************************ 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60103 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60103' 00:07:37.605 Process raid pid: 60103 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60103 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60103 ']' 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.605 01:27:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.605 [2024-11-17 01:27:45.826637] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:37.605 [2024-11-17 01:27:45.826818] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.605 [2024-11-17 01:27:46.008565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.865 [2024-11-17 01:27:46.120351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.865 [2024-11-17 01:27:46.310515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.865 [2024-11-17 01:27:46.310546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.435 01:27:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.435 01:27:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:38.436 01:27:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:38.436 01:27:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.436 01:27:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.696 malloc0 00:07:38.696 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.696 01:27:47 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:38.696 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.696 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.696 [2024-11-17 01:27:47.133575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:38.696 [2024-11-17 01:27:47.133634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.696 [2024-11-17 01:27:47.133656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:38.696 [2024-11-17 01:27:47.133667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.696 [2024-11-17 01:27:47.135641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.696 [2024-11-17 01:27:47.135680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:38.696 pt0 00:07:38.696 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.696 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:38.696 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.696 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 156557f9-b015-4999-93c0-d77789dc5115 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.957 01:27:47 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 e38f719b-2d37-43d0-b516-57eb3a8502d9 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 d1c370da-81b9-44f1-9bdb-acd7ed5b2424 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 [2024-11-17 01:27:47.265561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e38f719b-2d37-43d0-b516-57eb3a8502d9 is claimed 00:07:38.957 [2024-11-17 01:27:47.265662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d1c370da-81b9-44f1-9bdb-acd7ed5b2424 is claimed 00:07:38.957 [2024-11-17 01:27:47.265809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:38.957 [2024-11-17 01:27:47.265827] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:38.957 [2024-11-17 01:27:47.266053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:38.957 [2024-11-17 01:27:47.266234] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:38.957 [2024-11-17 01:27:47.266245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:38.957 [2024-11-17 01:27:47.266391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 [2024-11-17 01:27:47.377541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.957 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.958 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:38.958 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:38.958 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:38.958 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.218 [2024-11-17 01:27:47.421400] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:39.218 [2024-11-17 01:27:47.421426] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e38f719b-2d37-43d0-b516-57eb3a8502d9' was resized: old size 131072, new size 204800 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:39.218 01:27:47 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.218 [2024-11-17 01:27:47.433346] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:39.218 [2024-11-17 01:27:47.433372] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd1c370da-81b9-44f1-9bdb-acd7ed5b2424' was resized: old size 131072, new size 204800 00:07:39.218 [2024-11-17 01:27:47.433392] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.218 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:39.219 [2024-11-17 01:27:47.537252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.219 [2024-11-17 01:27:47.584972] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:39.219 [2024-11-17 01:27:47.585031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:39.219 [2024-11-17 01:27:47.585054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:39.219 [2024-11-17 01:27:47.585189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.219 [2024-11-17 01:27:47.585335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.219 [2024-11-17 01:27:47.585394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.219 [2024-11-17 01:27:47.585407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.219 [2024-11-17 01:27:47.596912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:39.219 [2024-11-17 01:27:47.596959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.219 [2024-11-17 01:27:47.596976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:39.219 [2024-11-17 01:27:47.596988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.219 [2024-11-17 01:27:47.598948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.219 [2024-11-17 01:27:47.598988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:39.219 [2024-11-17 01:27:47.600476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
e38f719b-2d37-43d0-b516-57eb3a8502d9 00:07:39.219 [2024-11-17 01:27:47.600547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e38f719b-2d37-43d0-b516-57eb3a8502d9 is claimed 00:07:39.219 [2024-11-17 01:27:47.600656] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d1c370da-81b9-44f1-9bdb-acd7ed5b2424 00:07:39.219 [2024-11-17 01:27:47.600684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d1c370da-81b9-44f1-9bdb-acd7ed5b2424 is claimed 00:07:39.219 [2024-11-17 01:27:47.600814] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d1c370da-81b9-44f1-9bdb-acd7ed5b2424 (2) smaller than existing raid bdev Raid (3) 00:07:39.219 [2024-11-17 01:27:47.600833] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev e38f719b-2d37-43d0-b516-57eb3a8502d9: File exists 00:07:39.219 [2024-11-17 01:27:47.600870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:39.219 [2024-11-17 01:27:47.600881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:39.219 [2024-11-17 01:27:47.601098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:39.219 [2024-11-17 01:27:47.601238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:39.219 [2024-11-17 01:27:47.601245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:39.219 [2024-11-17 01:27:47.601413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.219 pt0 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:39.219 [2024-11-17 01:27:47.621512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60103 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60103 ']' 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60103 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.219 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60103 00:07:39.480 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.480 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.480 killing process with pid 60103 00:07:39.480 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60103' 00:07:39.480 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60103 00:07:39.480 [2024-11-17 01:27:47.704233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.480 [2024-11-17 01:27:47.704287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.480 [2024-11-17 01:27:47.704336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.480 [2024-11-17 01:27:47.704346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:39.480 01:27:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60103 00:07:40.862 [2024-11-17 01:27:49.057072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.802 01:27:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:41.802 00:07:41.802 real 0m4.378s 00:07:41.802 user 0m4.558s 00:07:41.802 sys 0m0.575s 00:07:41.802 01:27:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.802 01:27:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.802 ************************************ 00:07:41.802 END TEST raid1_resize_superblock_test 00:07:41.802 
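The block counts reported during this test are internally consistent: each 64 MiB lvol is 131072 512-byte blocks, the assembled raid1 bdev reports blockcnt 122880, and after both legs are resized to 100 MiB (204800 blocks) the raid reports 196608 blocks. A minimal arithmetic sketch of that relationship, assuming the constant 8192-block gap is a fixed superblock/metadata reservation (that constant is inferred from these log numbers, not taken from SPDK source):

```python
BLOCK_SIZE = 512  # blocklen reported in the log


def mib_to_blocks(mib: int) -> int:
    """Convert a size in MiB to 512-byte blocks."""
    return mib * 1024 * 1024 // BLOCK_SIZE


# Assumption: the gap between one leg's size and the raid1 bdev's size
# is a fixed reservation for the on-disk superblock/metadata.
SB_RESERVED = mib_to_blocks(64) - 122880  # 8192 blocks (4 MiB)

assert mib_to_blocks(64) == 131072                  # lvol0/lvol1 at creation
assert mib_to_blocks(64) - SB_RESERVED == 122880    # initial raid1 blockcnt
assert mib_to_blocks(100) == 204800                 # "new size 204800" after resize
assert mib_to_blocks(100) - SB_RESERVED == 196608   # "changed from 122880 to 196608"
```

This also explains why the raid's usable size tracks a single leg rather than the sum of both: raid1 mirrors the data, so capacity equals one base bdev minus the reservation.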
************************************ 00:07:41.802 01:27:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:41.802 01:27:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:41.802 01:27:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:41.802 01:27:50 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:41.802 01:27:50 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:41.802 01:27:50 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:41.802 01:27:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.802 01:27:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.802 01:27:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.802 ************************************ 00:07:41.802 START TEST raid_function_test_raid0 00:07:41.802 ************************************ 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60208 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60208' 00:07:41.802 Process raid pid: 60208 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60208 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 
60208 ']' 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.802 01:27:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:42.062 [2024-11-17 01:27:50.290700] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:42.062 [2024-11-17 01:27:50.290822] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.062 [2024-11-17 01:27:50.461339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.321 [2024-11-17 01:27:50.571003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.321 [2024-11-17 01:27:50.774529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.321 [2024-11-17 01:27:50.774588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:42.890 01:27:51 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:42.890 Base_1 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:42.890 Base_2 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:42.890 [2024-11-17 01:27:51.215454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:42.890 [2024-11-17 01:27:51.217242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:42.890 [2024-11-17 01:27:51.217320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:42.890 [2024-11-17 01:27:51.217332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:42.890 [2024-11-17 01:27:51.217605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.890 [2024-11-17 01:27:51.217774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:42.890 [2024-11-17 01:27:51.217787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:42.890 [2024-11-17 01:27:51.217957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.890 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:07:42.891 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:43.155 [2024-11-17 01:27:51.439121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:43.155 /dev/nbd0 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.155 1+0 records in 00:07:43.155 1+0 records out 00:07:43.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419744 s, 9.8 MB/s 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 
-- # size=4096 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:43.155 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:43.420 { 00:07:43.420 "nbd_device": "/dev/nbd0", 00:07:43.420 "bdev_name": "raid" 00:07:43.420 } 00:07:43.420 ]' 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:43.420 { 00:07:43.420 "nbd_device": "/dev/nbd0", 00:07:43.420 "bdev_name": "raid" 00:07:43.420 } 00:07:43.420 ]' 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:43.420 01:27:51 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:43.420 
01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:43.420 4096+0 records in 00:07:43.420 4096+0 records out 00:07:43.420 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0271115 s, 77.4 MB/s 00:07:43.420 01:27:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:43.681 4096+0 records in 00:07:43.681 4096+0 records out 00:07:43.681 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.18563 s, 11.3 MB/s 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:43.681 128+0 records in 00:07:43.681 128+0 records out 00:07:43.681 65536 bytes (66 kB, 64 KiB) copied, 0.00119298 s, 54.9 MB/s 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:43.681 2035+0 records in 00:07:43.681 2035+0 records out 00:07:43.681 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.015372 s, 67.8 MB/s 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:43.681 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:43.941 456+0 records in 00:07:43.941 456+0 records out 00:07:43.941 233472 bytes (233 kB, 228 KiB) copied, 0.00383806 s, 60.8 MB/s 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:43.941 [2024-11-17 01:27:52.380221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:43.941 01:27:52 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:43.941 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:44.201 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:44.202 01:27:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60208 00:07:44.202 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60208 ']' 00:07:44.202 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60208 
00:07:44.202 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:44.202 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.202 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60208 00:07:44.461 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.461 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.461 killing process with pid 60208 00:07:44.461 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60208' 00:07:44.461 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60208 00:07:44.461 [2024-11-17 01:27:52.683558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.461 [2024-11-17 01:27:52.683678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.461 01:27:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60208 00:07:44.461 [2024-11-17 01:27:52.683729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.461 [2024-11-17 01:27:52.683749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:44.461 [2024-11-17 01:27:52.889495] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.843 01:27:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:45.843 00:07:45.843 real 0m3.737s 00:07:45.843 user 0m4.320s 00:07:45.843 sys 0m0.975s 00:07:45.843 01:27:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.843 01:27:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:07:45.843 ************************************ 00:07:45.843 END TEST raid_function_test_raid0 00:07:45.843 ************************************ 00:07:45.843 01:27:53 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:45.843 01:27:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.843 01:27:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.843 01:27:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.843 ************************************ 00:07:45.843 START TEST raid_function_test_concat 00:07:45.843 ************************************ 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60331 00:07:45.843 Process raid pid: 60331 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60331' 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60331 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60331 ']' 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 
00:07:45.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.843 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:45.843 [2024-11-17 01:27:54.095198] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:45.843 [2024-11-17 01:27:54.095308] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.843 [2024-11-17 01:27:54.268613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.104 [2024-11-17 01:27:54.373694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.104 [2024-11-17 01:27:54.553927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.104 [2024-11-17 01:27:54.553967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:46.674 Base_1 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.674 01:27:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:46.674 Base_2 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:46.674 [2024-11-17 01:27:55.008851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:46.674 [2024-11-17 01:27:55.010518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:46.674 [2024-11-17 01:27:55.010592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:46.674 [2024-11-17 01:27:55.010604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:46.674 [2024-11-17 01:27:55.010865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.674 [2024-11-17 01:27:55.011020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:46.674 [2024-11-17 01:27:55.011032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:46.674 [2024-11-17 01:27:55.011169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:46.674 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:46.934 
[2024-11-17 01:27:55.244482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:46.934 /dev/nbd0 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.934 1+0 records in 00:07:46.934 1+0 records out 00:07:46.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386189 s, 10.6 MB/s 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.934 01:27:55 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:46.934 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:47.194 { 00:07:47.194 "nbd_device": "/dev/nbd0", 00:07:47.194 "bdev_name": "raid" 00:07:47.194 } 00:07:47.194 ]' 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:47.194 { 00:07:47.194 "nbd_device": "/dev/nbd0", 00:07:47.194 "bdev_name": "raid" 00:07:47.194 } 00:07:47.194 ]' 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:47.194 01:27:55 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:47.194 4096+0 records in 
00:07:47.194 4096+0 records out 00:07:47.194 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0325971 s, 64.3 MB/s 00:07:47.194 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:47.453 4096+0 records in 00:07:47.453 4096+0 records out 00:07:47.453 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.24083 s, 8.7 MB/s 00:07:47.453 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:47.454 128+0 records in 00:07:47.454 128+0 records out 00:07:47.454 65536 bytes (66 kB, 64 KiB) copied, 0.00115842 s, 56.6 MB/s 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:47.454 01:27:55 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:47.714 2035+0 records in 00:07:47.714 2035+0 records out 00:07:47.714 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0146313 s, 71.2 MB/s 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:47.714 456+0 records in 00:07:47.714 456+0 records out 00:07:47.714 233472 bytes (233 kB, 228 KiB) copied, 0.00360911 s, 64.7 MB/s 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.714 01:27:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:47.974 [2024-11-17 01:27:56.201671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@45 -- # return 0 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.974 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60331 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60331 ']' 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60331 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 
00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60331 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.234 killing process with pid 60331 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60331' 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60331 00:07:48.234 [2024-11-17 01:27:56.524583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.234 [2024-11-17 01:27:56.524687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.234 [2024-11-17 01:27:56.524744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.234 01:27:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60331 00:07:48.234 [2024-11-17 01:27:56.524767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:48.494 [2024-11-17 01:27:56.719705] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.434 01:27:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:49.434 00:07:49.434 real 0m3.760s 00:07:49.434 user 0m4.313s 00:07:49.434 sys 0m0.999s 00:07:49.434 01:27:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.434 01:27:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:49.434 ************************************ 00:07:49.434 END TEST 
raid_function_test_concat 00:07:49.434 ************************************ 00:07:49.434 01:27:57 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:49.434 01:27:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.434 01:27:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.434 01:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.434 ************************************ 00:07:49.434 START TEST raid0_resize_test 00:07:49.434 ************************************ 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60454 00:07:49.434 Process raid pid: 60454 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60454' 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60454 00:07:49.434 
01:27:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60454 ']' 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.434 01:27:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.694 [2024-11-17 01:27:57.924171] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:49.694 [2024-11-17 01:27:57.924290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.694 [2024-11-17 01:27:58.099893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.954 [2024-11-17 01:27:58.208574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.954 [2024-11-17 01:27:58.389485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.954 [2024-11-17 01:27:58.389524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:50.524 01:27:58 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.524 Base_1 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.524 Base_2 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.524 [2024-11-17 01:27:58.757048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:50.524 [2024-11-17 01:27:58.758784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:50.524 [2024-11-17 01:27:58.758840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:50.524 [2024-11-17 01:27:58.758852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:50.524 [2024-11-17 01:27:58.759081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:50.524 [2024-11-17 01:27:58.759197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:50.524 [2024-11-17 01:27:58.759214] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:50.524 [2024-11-17 01:27:58.759355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.524 [2024-11-17 01:27:58.765010] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:50.524 [2024-11-17 01:27:58.765039] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:50.524 true 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.524 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.525 [2024-11-17 01:27:58.777165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:50.525 01:27:58 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.525 [2024-11-17 01:27:58.828881] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:50.525 [2024-11-17 01:27:58.828907] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:50.525 [2024-11-17 01:27:58.828926] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:50.525 true 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:50.525 [2024-11-17 01:27:58.841022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:50.525 01:27:58 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60454 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60454 ']' 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60454 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60454 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.525 killing process with pid 60454 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60454' 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60454 00:07:50.525 [2024-11-17 01:27:58.927626] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.525 [2024-11-17 01:27:58.927694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.525 [2024-11-17 01:27:58.927731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.525 [2024-11-17 01:27:58.927739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:50.525 01:27:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60454 00:07:50.525 [2024-11-17 01:27:58.945197] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:07:51.908 01:27:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:51.908 00:07:51.908 real 0m2.146s 00:07:51.908 user 0m2.261s 00:07:51.908 sys 0m0.342s 00:07:51.908 01:27:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.908 01:27:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.908 ************************************ 00:07:51.908 END TEST raid0_resize_test 00:07:51.908 ************************************ 00:07:51.908 01:28:00 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:51.908 01:28:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.908 01:28:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.908 01:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.908 ************************************ 00:07:51.908 START TEST raid1_resize_test 00:07:51.908 ************************************ 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:51.908 01:28:00 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60510 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:51.908 Process raid pid: 60510 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60510' 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60510 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60510 ']' 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.908 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.908 [2024-11-17 01:28:00.133136] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:51.908 [2024-11-17 01:28:00.133258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.908 [2024-11-17 01:28:00.289754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.167 [2024-11-17 01:28:00.398281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.167 [2024-11-17 01:28:00.598203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.167 [2024-11-17 01:28:00.598244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.738 Base_1 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.738 Base_2 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.738 [2024-11-17 01:28:00.985061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:52.738 [2024-11-17 01:28:00.986744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:52.738 [2024-11-17 01:28:00.986816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:52.738 [2024-11-17 01:28:00.986828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:52.738 [2024-11-17 01:28:00.987072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:52.738 [2024-11-17 01:28:00.987200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:52.738 [2024-11-17 01:28:00.987213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:52.738 [2024-11-17 01:28:00.987334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.738 01:28:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.738 [2024-11-17 01:28:00.997022] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:52.738 [2024-11-17 01:28:00.997055] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:52.738 true 00:07:52.738 
01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.738 [2024-11-17 01:28:01.013149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.738 [2024-11-17 01:28:01.060898] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:52.738 [2024-11-17 01:28:01.060924] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:52.738 [2024-11-17 01:28:01.060943] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:52.738 true 00:07:52.738 01:28:01 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.738 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:52.739 [2024-11-17 01:28:01.073042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60510 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60510 ']' 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60510 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60510 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.739 killing process with pid 60510 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60510' 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60510 00:07:52.739 [2024-11-17 01:28:01.160477] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.739 [2024-11-17 01:28:01.160552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.739 01:28:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60510 00:07:52.739 [2024-11-17 01:28:01.161017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.739 [2024-11-17 01:28:01.161043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:52.739 [2024-11-17 01:28:01.178349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.122 01:28:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:54.122 00:07:54.122 real 0m2.178s 00:07:54.122 user 0m2.318s 00:07:54.122 sys 0m0.321s 00:07:54.122 01:28:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.122 01:28:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.122 ************************************ 00:07:54.122 END TEST raid1_resize_test 00:07:54.122 ************************************ 00:07:54.122 01:28:02 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:54.122 01:28:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:54.122 01:28:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:54.122 01:28:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:54.122 01:28:02 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.122 01:28:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.122 ************************************ 00:07:54.122 START TEST raid_state_function_test 00:07:54.122 ************************************ 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60573 00:07:54.122 Process raid pid: 60573 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60573' 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60573 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60573 ']' 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.122 01:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.122 [2024-11-17 01:28:02.386695] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:54.122 [2024-11-17 01:28:02.386802] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.122 [2024-11-17 01:28:02.560341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.382 [2024-11-17 01:28:02.671363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.642 [2024-11-17 01:28:02.852189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.642 [2024-11-17 01:28:02.852235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.902 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.902 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.902 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.902 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.902 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.902 [2024-11-17 01:28:03.214041] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:54.902 
[2024-11-17 01:28:03.214091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.902 [2024-11-17 01:28:03.214104] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.902 [2024-11-17 01:28:03.214113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.903 "name": "Existed_Raid", 00:07:54.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.903 "strip_size_kb": 64, 00:07:54.903 "state": "configuring", 00:07:54.903 "raid_level": "raid0", 00:07:54.903 "superblock": false, 00:07:54.903 "num_base_bdevs": 2, 00:07:54.903 "num_base_bdevs_discovered": 0, 00:07:54.903 "num_base_bdevs_operational": 2, 00:07:54.903 "base_bdevs_list": [ 00:07:54.903 { 00:07:54.903 "name": "BaseBdev1", 00:07:54.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.903 "is_configured": false, 00:07:54.903 "data_offset": 0, 00:07:54.903 "data_size": 0 00:07:54.903 }, 00:07:54.903 { 00:07:54.903 "name": "BaseBdev2", 00:07:54.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.903 "is_configured": false, 00:07:54.903 "data_offset": 0, 00:07:54.903 "data_size": 0 00:07:54.903 } 00:07:54.903 ] 00:07:54.903 }' 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.903 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.473 [2024-11-17 01:28:03.637315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.473 [2024-11-17 01:28:03.637353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.473 [2024-11-17 01:28:03.645283] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.473 [2024-11-17 01:28:03.645321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.473 [2024-11-17 01:28:03.645329] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.473 [2024-11-17 01:28:03.645340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.473 [2024-11-17 01:28:03.686025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.473 BaseBdev1 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:55.473 01:28:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.473 [ 00:07:55.473 { 00:07:55.473 "name": "BaseBdev1", 00:07:55.473 "aliases": [ 00:07:55.473 "57ec3db9-60bf-43f9-a132-d3ca8e4ba60e" 00:07:55.473 ], 00:07:55.473 "product_name": "Malloc disk", 00:07:55.473 "block_size": 512, 00:07:55.473 "num_blocks": 65536, 00:07:55.473 "uuid": "57ec3db9-60bf-43f9-a132-d3ca8e4ba60e", 00:07:55.473 "assigned_rate_limits": { 00:07:55.473 "rw_ios_per_sec": 0, 00:07:55.473 "rw_mbytes_per_sec": 0, 00:07:55.473 "r_mbytes_per_sec": 0, 00:07:55.473 "w_mbytes_per_sec": 0 00:07:55.473 }, 00:07:55.473 "claimed": true, 00:07:55.473 "claim_type": "exclusive_write", 00:07:55.473 "zoned": false, 00:07:55.473 "supported_io_types": { 00:07:55.473 "read": true, 00:07:55.473 "write": true, 00:07:55.473 "unmap": true, 00:07:55.473 "flush": true, 
00:07:55.473 "reset": true, 00:07:55.473 "nvme_admin": false, 00:07:55.473 "nvme_io": false, 00:07:55.473 "nvme_io_md": false, 00:07:55.473 "write_zeroes": true, 00:07:55.473 "zcopy": true, 00:07:55.473 "get_zone_info": false, 00:07:55.473 "zone_management": false, 00:07:55.473 "zone_append": false, 00:07:55.473 "compare": false, 00:07:55.473 "compare_and_write": false, 00:07:55.473 "abort": true, 00:07:55.473 "seek_hole": false, 00:07:55.473 "seek_data": false, 00:07:55.473 "copy": true, 00:07:55.473 "nvme_iov_md": false 00:07:55.473 }, 00:07:55.473 "memory_domains": [ 00:07:55.473 { 00:07:55.473 "dma_device_id": "system", 00:07:55.473 "dma_device_type": 1 00:07:55.473 }, 00:07:55.473 { 00:07:55.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.473 "dma_device_type": 2 00:07:55.473 } 00:07:55.473 ], 00:07:55.473 "driver_specific": {} 00:07:55.473 } 00:07:55.473 ] 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.473 "name": "Existed_Raid", 00:07:55.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.473 "strip_size_kb": 64, 00:07:55.473 "state": "configuring", 00:07:55.473 "raid_level": "raid0", 00:07:55.473 "superblock": false, 00:07:55.473 "num_base_bdevs": 2, 00:07:55.473 "num_base_bdevs_discovered": 1, 00:07:55.473 "num_base_bdevs_operational": 2, 00:07:55.473 "base_bdevs_list": [ 00:07:55.473 { 00:07:55.473 "name": "BaseBdev1", 00:07:55.473 "uuid": "57ec3db9-60bf-43f9-a132-d3ca8e4ba60e", 00:07:55.473 "is_configured": true, 00:07:55.473 "data_offset": 0, 00:07:55.473 "data_size": 65536 00:07:55.473 }, 00:07:55.473 { 00:07:55.473 "name": "BaseBdev2", 00:07:55.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.473 "is_configured": false, 00:07:55.473 "data_offset": 0, 00:07:55.473 "data_size": 0 00:07:55.473 } 00:07:55.473 ] 00:07:55.473 }' 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.473 01:28:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.733 [2024-11-17 01:28:04.161257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.733 [2024-11-17 01:28:04.161314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.733 [2024-11-17 01:28:04.173271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.733 [2024-11-17 01:28:04.175000] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.733 [2024-11-17 01:28:04.175047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.733 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.993 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.993 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.993 "name": "Existed_Raid", 00:07:55.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.993 "strip_size_kb": 64, 00:07:55.993 "state": "configuring", 00:07:55.993 "raid_level": "raid0", 00:07:55.993 "superblock": false, 00:07:55.993 "num_base_bdevs": 2, 00:07:55.993 
"num_base_bdevs_discovered": 1, 00:07:55.993 "num_base_bdevs_operational": 2, 00:07:55.993 "base_bdevs_list": [ 00:07:55.993 { 00:07:55.993 "name": "BaseBdev1", 00:07:55.993 "uuid": "57ec3db9-60bf-43f9-a132-d3ca8e4ba60e", 00:07:55.993 "is_configured": true, 00:07:55.993 "data_offset": 0, 00:07:55.993 "data_size": 65536 00:07:55.993 }, 00:07:55.993 { 00:07:55.993 "name": "BaseBdev2", 00:07:55.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.993 "is_configured": false, 00:07:55.993 "data_offset": 0, 00:07:55.993 "data_size": 0 00:07:55.993 } 00:07:55.993 ] 00:07:55.993 }' 00:07:55.993 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.993 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.252 [2024-11-17 01:28:04.658109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.252 [2024-11-17 01:28:04.658154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.252 [2024-11-17 01:28:04.658179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:56.252 [2024-11-17 01:28:04.658449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.252 [2024-11-17 01:28:04.658611] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.252 [2024-11-17 01:28:04.658630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:56.252 [2024-11-17 01:28:04.658890] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.252 BaseBdev2 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.252 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.252 [ 00:07:56.252 { 00:07:56.252 "name": "BaseBdev2", 00:07:56.252 "aliases": [ 00:07:56.252 "b982b26c-a78c-4f40-9ad3-8a54242d7fd1" 00:07:56.252 ], 00:07:56.252 "product_name": "Malloc disk", 00:07:56.252 "block_size": 512, 00:07:56.252 "num_blocks": 65536, 00:07:56.252 "uuid": "b982b26c-a78c-4f40-9ad3-8a54242d7fd1", 00:07:56.252 
"assigned_rate_limits": { 00:07:56.252 "rw_ios_per_sec": 0, 00:07:56.252 "rw_mbytes_per_sec": 0, 00:07:56.252 "r_mbytes_per_sec": 0, 00:07:56.252 "w_mbytes_per_sec": 0 00:07:56.252 }, 00:07:56.252 "claimed": true, 00:07:56.252 "claim_type": "exclusive_write", 00:07:56.252 "zoned": false, 00:07:56.252 "supported_io_types": { 00:07:56.252 "read": true, 00:07:56.252 "write": true, 00:07:56.253 "unmap": true, 00:07:56.253 "flush": true, 00:07:56.253 "reset": true, 00:07:56.253 "nvme_admin": false, 00:07:56.253 "nvme_io": false, 00:07:56.253 "nvme_io_md": false, 00:07:56.253 "write_zeroes": true, 00:07:56.253 "zcopy": true, 00:07:56.253 "get_zone_info": false, 00:07:56.253 "zone_management": false, 00:07:56.253 "zone_append": false, 00:07:56.253 "compare": false, 00:07:56.253 "compare_and_write": false, 00:07:56.253 "abort": true, 00:07:56.253 "seek_hole": false, 00:07:56.253 "seek_data": false, 00:07:56.253 "copy": true, 00:07:56.253 "nvme_iov_md": false 00:07:56.253 }, 00:07:56.253 "memory_domains": [ 00:07:56.253 { 00:07:56.253 "dma_device_id": "system", 00:07:56.253 "dma_device_type": 1 00:07:56.253 }, 00:07:56.253 { 00:07:56.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.253 "dma_device_type": 2 00:07:56.253 } 00:07:56.253 ], 00:07:56.253 "driver_specific": {} 00:07:56.253 } 00:07:56.253 ] 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.253 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.513 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.513 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.513 "name": "Existed_Raid", 00:07:56.513 "uuid": "651339fa-e994-4a50-8e87-beacf26b2d03", 00:07:56.513 "strip_size_kb": 64, 00:07:56.513 "state": "online", 00:07:56.513 "raid_level": "raid0", 00:07:56.513 "superblock": false, 00:07:56.513 "num_base_bdevs": 2, 00:07:56.513 "num_base_bdevs_discovered": 2, 00:07:56.513 "num_base_bdevs_operational": 2, 00:07:56.513 "base_bdevs_list": [ 00:07:56.513 { 
00:07:56.513 "name": "BaseBdev1", 00:07:56.513 "uuid": "57ec3db9-60bf-43f9-a132-d3ca8e4ba60e", 00:07:56.513 "is_configured": true, 00:07:56.513 "data_offset": 0, 00:07:56.513 "data_size": 65536 00:07:56.513 }, 00:07:56.513 { 00:07:56.513 "name": "BaseBdev2", 00:07:56.513 "uuid": "b982b26c-a78c-4f40-9ad3-8a54242d7fd1", 00:07:56.513 "is_configured": true, 00:07:56.513 "data_offset": 0, 00:07:56.513 "data_size": 65536 00:07:56.513 } 00:07:56.513 ] 00:07:56.513 }' 00:07:56.513 01:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.513 01:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.773 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.773 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:56.773 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.773 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.773 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 [2024-11-17 01:28:05.133597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.774 "name": "Existed_Raid", 00:07:56.774 "aliases": [ 00:07:56.774 "651339fa-e994-4a50-8e87-beacf26b2d03" 00:07:56.774 ], 00:07:56.774 "product_name": "Raid Volume", 00:07:56.774 "block_size": 512, 00:07:56.774 "num_blocks": 131072, 00:07:56.774 "uuid": "651339fa-e994-4a50-8e87-beacf26b2d03", 00:07:56.774 "assigned_rate_limits": { 00:07:56.774 "rw_ios_per_sec": 0, 00:07:56.774 "rw_mbytes_per_sec": 0, 00:07:56.774 "r_mbytes_per_sec": 0, 00:07:56.774 "w_mbytes_per_sec": 0 00:07:56.774 }, 00:07:56.774 "claimed": false, 00:07:56.774 "zoned": false, 00:07:56.774 "supported_io_types": { 00:07:56.774 "read": true, 00:07:56.774 "write": true, 00:07:56.774 "unmap": true, 00:07:56.774 "flush": true, 00:07:56.774 "reset": true, 00:07:56.774 "nvme_admin": false, 00:07:56.774 "nvme_io": false, 00:07:56.774 "nvme_io_md": false, 00:07:56.774 "write_zeroes": true, 00:07:56.774 "zcopy": false, 00:07:56.774 "get_zone_info": false, 00:07:56.774 "zone_management": false, 00:07:56.774 "zone_append": false, 00:07:56.774 "compare": false, 00:07:56.774 "compare_and_write": false, 00:07:56.774 "abort": false, 00:07:56.774 "seek_hole": false, 00:07:56.774 "seek_data": false, 00:07:56.774 "copy": false, 00:07:56.774 "nvme_iov_md": false 00:07:56.774 }, 00:07:56.774 "memory_domains": [ 00:07:56.774 { 00:07:56.774 "dma_device_id": "system", 00:07:56.774 "dma_device_type": 1 00:07:56.774 }, 00:07:56.774 { 00:07:56.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.774 "dma_device_type": 2 00:07:56.774 }, 00:07:56.774 { 00:07:56.774 "dma_device_id": "system", 00:07:56.774 "dma_device_type": 1 00:07:56.774 }, 00:07:56.774 { 00:07:56.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.774 "dma_device_type": 2 00:07:56.774 } 00:07:56.774 ], 00:07:56.774 "driver_specific": { 00:07:56.774 "raid": { 00:07:56.774 "uuid": "651339fa-e994-4a50-8e87-beacf26b2d03", 
00:07:56.774 "strip_size_kb": 64, 00:07:56.774 "state": "online", 00:07:56.774 "raid_level": "raid0", 00:07:56.774 "superblock": false, 00:07:56.774 "num_base_bdevs": 2, 00:07:56.774 "num_base_bdevs_discovered": 2, 00:07:56.774 "num_base_bdevs_operational": 2, 00:07:56.774 "base_bdevs_list": [ 00:07:56.774 { 00:07:56.774 "name": "BaseBdev1", 00:07:56.774 "uuid": "57ec3db9-60bf-43f9-a132-d3ca8e4ba60e", 00:07:56.774 "is_configured": true, 00:07:56.774 "data_offset": 0, 00:07:56.774 "data_size": 65536 00:07:56.774 }, 00:07:56.774 { 00:07:56.774 "name": "BaseBdev2", 00:07:56.774 "uuid": "b982b26c-a78c-4f40-9ad3-8a54242d7fd1", 00:07:56.774 "is_configured": true, 00:07:56.774 "data_offset": 0, 00:07:56.774 "data_size": 65536 00:07:56.774 } 00:07:56.774 ] 00:07:56.774 } 00:07:56.774 } 00:07:56.774 }' 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:56.774 BaseBdev2' 00:07:56.774 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.034 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.035 [2024-11-17 01:28:05.344991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.035 [2024-11-17 01:28:05.345023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.035 [2024-11-17 01:28:05.345068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.035 01:28:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.035 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.295 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.295 "name": "Existed_Raid", 00:07:57.295 "uuid": "651339fa-e994-4a50-8e87-beacf26b2d03", 00:07:57.295 "strip_size_kb": 64, 00:07:57.295 "state": "offline", 00:07:57.295 "raid_level": "raid0", 00:07:57.295 "superblock": false, 00:07:57.295 "num_base_bdevs": 2, 00:07:57.295 "num_base_bdevs_discovered": 1, 00:07:57.295 "num_base_bdevs_operational": 1, 00:07:57.295 "base_bdevs_list": [ 00:07:57.295 { 00:07:57.295 "name": null, 00:07:57.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.295 "is_configured": false, 00:07:57.295 "data_offset": 0, 00:07:57.295 "data_size": 65536 00:07:57.295 }, 00:07:57.295 { 00:07:57.295 "name": "BaseBdev2", 00:07:57.295 "uuid": "b982b26c-a78c-4f40-9ad3-8a54242d7fd1", 00:07:57.295 "is_configured": true, 00:07:57.295 "data_offset": 0, 00:07:57.295 "data_size": 65536 00:07:57.295 } 00:07:57.295 ] 00:07:57.295 }' 00:07:57.295 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.295 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.555 01:28:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.555 [2024-11-17 01:28:05.891340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:57.555 [2024-11-17 01:28:05.891392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.555 01:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60573 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60573 ']' 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60573 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60573 00:07:57.815 killing process with pid 60573 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60573' 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60573 00:07:57.815 [2024-11-17 01:28:06.065898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.815 01:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60573 00:07:57.815 [2024-11-17 01:28:06.082500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:58.777 00:07:58.777 real 0m4.838s 00:07:58.777 user 0m7.019s 00:07:58.777 sys 
0m0.780s 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.777 ************************************ 00:07:58.777 END TEST raid_state_function_test 00:07:58.777 ************************************ 00:07:58.777 01:28:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:58.777 01:28:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.777 01:28:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.777 01:28:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.777 ************************************ 00:07:58.777 START TEST raid_state_function_test_sb 00:07:58.777 ************************************ 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:58.777 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60820 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
60820' 00:07:58.778 Process raid pid: 60820 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60820 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60820 ']' 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.778 01:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.037 [2024-11-17 01:28:07.299276] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
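The `waitforlisten` step in the log above blocks until the freshly started `bdev_svc` process is accepting connections on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`). A minimal standalone sketch of that polling pattern is below; the helper name `wait_for_rpc_socket`, the retry count, and the sleep interval are illustrative assumptions, not SPDK's actual `waitforlisten` implementation:

```shell
# Poll until a UNIX-domain socket file exists and accepts a connection,
# or give up after a bounded number of retries.
# NOTE: wait_for_rpc_socket is a hypothetical helper for illustration;
# SPDK's real waitforlisten lives in autotest_common.sh and differs.
wait_for_rpc_socket() {
    local sock=$1
    local retries=${2:-100}   # default: 100 attempts x 0.1s = ~10s
    while (( retries-- > 0 )); do
        # -S: the path exists and is a socket; then try an actual connect
        if [ -S "$sock" ] && python3 - "$sock" <<'PY'
import socket, sys
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.settimeout(1)
try:
    s.connect(sys.argv[1])
except OSError:
    sys.exit(1)
PY
        then
            return 0
        fi
        sleep 0.1
    done
    return 1   # process never started listening
}
```

Once the socket answers, the harness proceeds to issue `rpc_cmd` calls (such as the `bdev_raid_create` and `bdev_raid_get_bdevs` invocations seen throughout this log) against that same socket.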
00:07:59.037 [2024-11-17 01:28:07.299475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.037 [2024-11-17 01:28:07.473812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.297 [2024-11-17 01:28:07.582329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.556 [2024-11-17 01:28:07.773124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.556 [2024-11-17 01:28:07.773155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.817 [2024-11-17 01:28:08.124670] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.817 [2024-11-17 01:28:08.124723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.817 [2024-11-17 01:28:08.124733] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.817 [2024-11-17 01:28:08.124743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.817 
01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.817 "name": "Existed_Raid", 00:07:59.817 "uuid": "c51fc476-08c6-4087-96e2-78000338a31c", 00:07:59.817 "strip_size_kb": 
64, 00:07:59.817 "state": "configuring", 00:07:59.817 "raid_level": "raid0", 00:07:59.817 "superblock": true, 00:07:59.817 "num_base_bdevs": 2, 00:07:59.817 "num_base_bdevs_discovered": 0, 00:07:59.817 "num_base_bdevs_operational": 2, 00:07:59.817 "base_bdevs_list": [ 00:07:59.817 { 00:07:59.817 "name": "BaseBdev1", 00:07:59.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.817 "is_configured": false, 00:07:59.817 "data_offset": 0, 00:07:59.817 "data_size": 0 00:07:59.817 }, 00:07:59.817 { 00:07:59.817 "name": "BaseBdev2", 00:07:59.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.817 "is_configured": false, 00:07:59.817 "data_offset": 0, 00:07:59.817 "data_size": 0 00:07:59.817 } 00:07:59.817 ] 00:07:59.817 }' 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.817 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.389 [2024-11-17 01:28:08.599811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.389 [2024-11-17 01:28:08.599906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.389 01:28:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.389 [2024-11-17 01:28:08.611800] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.389 [2024-11-17 01:28:08.611885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.389 [2024-11-17 01:28:08.611922] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.389 [2024-11-17 01:28:08.611952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.389 [2024-11-17 01:28:08.661180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.389 BaseBdev1 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.389 [ 00:08:00.389 { 00:08:00.389 "name": "BaseBdev1", 00:08:00.389 "aliases": [ 00:08:00.389 "45d1de86-d0a9-4067-a166-826d9ad0e49c" 00:08:00.389 ], 00:08:00.389 "product_name": "Malloc disk", 00:08:00.389 "block_size": 512, 00:08:00.389 "num_blocks": 65536, 00:08:00.389 "uuid": "45d1de86-d0a9-4067-a166-826d9ad0e49c", 00:08:00.389 "assigned_rate_limits": { 00:08:00.389 "rw_ios_per_sec": 0, 00:08:00.389 "rw_mbytes_per_sec": 0, 00:08:00.389 "r_mbytes_per_sec": 0, 00:08:00.389 "w_mbytes_per_sec": 0 00:08:00.389 }, 00:08:00.389 "claimed": true, 00:08:00.389 "claim_type": "exclusive_write", 00:08:00.389 "zoned": false, 00:08:00.389 "supported_io_types": { 00:08:00.389 "read": true, 00:08:00.389 "write": true, 00:08:00.389 "unmap": true, 00:08:00.389 "flush": true, 00:08:00.389 "reset": true, 00:08:00.389 "nvme_admin": false, 00:08:00.389 "nvme_io": false, 00:08:00.389 "nvme_io_md": false, 00:08:00.389 "write_zeroes": true, 00:08:00.389 "zcopy": true, 00:08:00.389 "get_zone_info": false, 00:08:00.389 "zone_management": false, 00:08:00.389 "zone_append": false, 00:08:00.389 "compare": false, 00:08:00.389 "compare_and_write": false, 00:08:00.389 
"abort": true, 00:08:00.389 "seek_hole": false, 00:08:00.389 "seek_data": false, 00:08:00.389 "copy": true, 00:08:00.389 "nvme_iov_md": false 00:08:00.389 }, 00:08:00.389 "memory_domains": [ 00:08:00.389 { 00:08:00.389 "dma_device_id": "system", 00:08:00.389 "dma_device_type": 1 00:08:00.389 }, 00:08:00.389 { 00:08:00.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.389 "dma_device_type": 2 00:08:00.389 } 00:08:00.389 ], 00:08:00.389 "driver_specific": {} 00:08:00.389 } 00:08:00.389 ] 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.389 "name": "Existed_Raid", 00:08:00.389 "uuid": "1ef72c5b-017a-4d57-9401-88becd4a3623", 00:08:00.389 "strip_size_kb": 64, 00:08:00.389 "state": "configuring", 00:08:00.389 "raid_level": "raid0", 00:08:00.389 "superblock": true, 00:08:00.389 "num_base_bdevs": 2, 00:08:00.389 "num_base_bdevs_discovered": 1, 00:08:00.389 "num_base_bdevs_operational": 2, 00:08:00.389 "base_bdevs_list": [ 00:08:00.389 { 00:08:00.389 "name": "BaseBdev1", 00:08:00.389 "uuid": "45d1de86-d0a9-4067-a166-826d9ad0e49c", 00:08:00.389 "is_configured": true, 00:08:00.389 "data_offset": 2048, 00:08:00.389 "data_size": 63488 00:08:00.389 }, 00:08:00.389 { 00:08:00.389 "name": "BaseBdev2", 00:08:00.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.389 "is_configured": false, 00:08:00.389 "data_offset": 0, 00:08:00.389 "data_size": 0 00:08:00.389 } 00:08:00.389 ] 00:08:00.389 }' 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.389 01:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.649 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.649 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.649 01:28:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.908 [2024-11-17 01:28:09.108465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.908 [2024-11-17 01:28:09.108520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.908 [2024-11-17 01:28:09.120497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.908 [2024-11-17 01:28:09.122281] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.908 [2024-11-17 01:28:09.122324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.908 "name": "Existed_Raid", 00:08:00.908 "uuid": "e5824ef9-b26f-4d92-abe2-3bd8b6eb7665", 00:08:00.908 "strip_size_kb": 64, 00:08:00.908 "state": "configuring", 00:08:00.908 "raid_level": "raid0", 00:08:00.908 "superblock": true, 00:08:00.908 "num_base_bdevs": 2, 00:08:00.908 "num_base_bdevs_discovered": 1, 00:08:00.908 "num_base_bdevs_operational": 2, 00:08:00.908 "base_bdevs_list": [ 00:08:00.908 { 00:08:00.908 "name": "BaseBdev1", 00:08:00.908 "uuid": "45d1de86-d0a9-4067-a166-826d9ad0e49c", 00:08:00.908 "is_configured": true, 00:08:00.908 "data_offset": 2048, 
00:08:00.908 "data_size": 63488 00:08:00.908 }, 00:08:00.908 { 00:08:00.908 "name": "BaseBdev2", 00:08:00.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.908 "is_configured": false, 00:08:00.908 "data_offset": 0, 00:08:00.908 "data_size": 0 00:08:00.908 } 00:08:00.908 ] 00:08:00.908 }' 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.908 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.169 [2024-11-17 01:28:09.566038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.169 [2024-11-17 01:28:09.566434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:01.169 [2024-11-17 01:28:09.566488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:01.169 [2024-11-17 01:28:09.566788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:01.169 [2024-11-17 01:28:09.566977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:01.169 [2024-11-17 01:28:09.567038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:01.169 BaseBdev2 00:08:01.169 [2024-11-17 01:28:09.567225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.169 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.170 [ 00:08:01.170 { 00:08:01.170 "name": "BaseBdev2", 00:08:01.170 "aliases": [ 00:08:01.170 "397a960b-3832-4a92-afc8-d19c6cecb542" 00:08:01.170 ], 00:08:01.170 "product_name": "Malloc disk", 00:08:01.170 "block_size": 512, 00:08:01.170 "num_blocks": 65536, 00:08:01.170 "uuid": "397a960b-3832-4a92-afc8-d19c6cecb542", 00:08:01.170 "assigned_rate_limits": { 00:08:01.170 "rw_ios_per_sec": 0, 00:08:01.170 "rw_mbytes_per_sec": 0, 00:08:01.170 "r_mbytes_per_sec": 0, 00:08:01.170 "w_mbytes_per_sec": 0 00:08:01.170 }, 00:08:01.170 "claimed": true, 00:08:01.170 "claim_type": 
"exclusive_write", 00:08:01.170 "zoned": false, 00:08:01.170 "supported_io_types": { 00:08:01.170 "read": true, 00:08:01.170 "write": true, 00:08:01.170 "unmap": true, 00:08:01.170 "flush": true, 00:08:01.170 "reset": true, 00:08:01.170 "nvme_admin": false, 00:08:01.170 "nvme_io": false, 00:08:01.170 "nvme_io_md": false, 00:08:01.170 "write_zeroes": true, 00:08:01.170 "zcopy": true, 00:08:01.170 "get_zone_info": false, 00:08:01.170 "zone_management": false, 00:08:01.170 "zone_append": false, 00:08:01.170 "compare": false, 00:08:01.170 "compare_and_write": false, 00:08:01.170 "abort": true, 00:08:01.170 "seek_hole": false, 00:08:01.170 "seek_data": false, 00:08:01.170 "copy": true, 00:08:01.170 "nvme_iov_md": false 00:08:01.170 }, 00:08:01.170 "memory_domains": [ 00:08:01.170 { 00:08:01.170 "dma_device_id": "system", 00:08:01.170 "dma_device_type": 1 00:08:01.170 }, 00:08:01.170 { 00:08:01.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.170 "dma_device_type": 2 00:08:01.170 } 00:08:01.170 ], 00:08:01.170 "driver_specific": {} 00:08:01.170 } 00:08:01.170 ] 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.170 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.431 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.431 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.431 "name": "Existed_Raid", 00:08:01.431 "uuid": "e5824ef9-b26f-4d92-abe2-3bd8b6eb7665", 00:08:01.431 "strip_size_kb": 64, 00:08:01.431 "state": "online", 00:08:01.431 "raid_level": "raid0", 00:08:01.431 "superblock": true, 00:08:01.431 "num_base_bdevs": 2, 00:08:01.431 "num_base_bdevs_discovered": 2, 00:08:01.431 "num_base_bdevs_operational": 2, 00:08:01.431 "base_bdevs_list": [ 00:08:01.431 { 00:08:01.431 "name": "BaseBdev1", 00:08:01.431 "uuid": "45d1de86-d0a9-4067-a166-826d9ad0e49c", 00:08:01.431 "is_configured": true, 00:08:01.431 "data_offset": 2048, 00:08:01.431 "data_size": 63488 
00:08:01.431 }, 00:08:01.431 { 00:08:01.431 "name": "BaseBdev2", 00:08:01.431 "uuid": "397a960b-3832-4a92-afc8-d19c6cecb542", 00:08:01.431 "is_configured": true, 00:08:01.431 "data_offset": 2048, 00:08:01.431 "data_size": 63488 00:08:01.431 } 00:08:01.431 ] 00:08:01.431 }' 00:08:01.431 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.431 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.691 [2024-11-17 01:28:09.985623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.691 01:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.691 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.691 "name": 
"Existed_Raid", 00:08:01.691 "aliases": [ 00:08:01.691 "e5824ef9-b26f-4d92-abe2-3bd8b6eb7665" 00:08:01.691 ], 00:08:01.691 "product_name": "Raid Volume", 00:08:01.691 "block_size": 512, 00:08:01.691 "num_blocks": 126976, 00:08:01.691 "uuid": "e5824ef9-b26f-4d92-abe2-3bd8b6eb7665", 00:08:01.691 "assigned_rate_limits": { 00:08:01.691 "rw_ios_per_sec": 0, 00:08:01.691 "rw_mbytes_per_sec": 0, 00:08:01.691 "r_mbytes_per_sec": 0, 00:08:01.691 "w_mbytes_per_sec": 0 00:08:01.691 }, 00:08:01.691 "claimed": false, 00:08:01.691 "zoned": false, 00:08:01.691 "supported_io_types": { 00:08:01.691 "read": true, 00:08:01.691 "write": true, 00:08:01.691 "unmap": true, 00:08:01.691 "flush": true, 00:08:01.691 "reset": true, 00:08:01.691 "nvme_admin": false, 00:08:01.691 "nvme_io": false, 00:08:01.691 "nvme_io_md": false, 00:08:01.691 "write_zeroes": true, 00:08:01.691 "zcopy": false, 00:08:01.691 "get_zone_info": false, 00:08:01.691 "zone_management": false, 00:08:01.691 "zone_append": false, 00:08:01.691 "compare": false, 00:08:01.691 "compare_and_write": false, 00:08:01.691 "abort": false, 00:08:01.691 "seek_hole": false, 00:08:01.691 "seek_data": false, 00:08:01.691 "copy": false, 00:08:01.691 "nvme_iov_md": false 00:08:01.691 }, 00:08:01.691 "memory_domains": [ 00:08:01.691 { 00:08:01.691 "dma_device_id": "system", 00:08:01.691 "dma_device_type": 1 00:08:01.691 }, 00:08:01.691 { 00:08:01.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.691 "dma_device_type": 2 00:08:01.691 }, 00:08:01.691 { 00:08:01.691 "dma_device_id": "system", 00:08:01.691 "dma_device_type": 1 00:08:01.691 }, 00:08:01.691 { 00:08:01.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.691 "dma_device_type": 2 00:08:01.691 } 00:08:01.691 ], 00:08:01.691 "driver_specific": { 00:08:01.691 "raid": { 00:08:01.691 "uuid": "e5824ef9-b26f-4d92-abe2-3bd8b6eb7665", 00:08:01.691 "strip_size_kb": 64, 00:08:01.691 "state": "online", 00:08:01.691 "raid_level": "raid0", 00:08:01.691 "superblock": true, 00:08:01.691 
"num_base_bdevs": 2, 00:08:01.691 "num_base_bdevs_discovered": 2, 00:08:01.691 "num_base_bdevs_operational": 2, 00:08:01.691 "base_bdevs_list": [ 00:08:01.691 { 00:08:01.691 "name": "BaseBdev1", 00:08:01.691 "uuid": "45d1de86-d0a9-4067-a166-826d9ad0e49c", 00:08:01.691 "is_configured": true, 00:08:01.691 "data_offset": 2048, 00:08:01.691 "data_size": 63488 00:08:01.691 }, 00:08:01.691 { 00:08:01.691 "name": "BaseBdev2", 00:08:01.691 "uuid": "397a960b-3832-4a92-afc8-d19c6cecb542", 00:08:01.691 "is_configured": true, 00:08:01.691 "data_offset": 2048, 00:08:01.691 "data_size": 63488 00:08:01.691 } 00:08:01.691 ] 00:08:01.691 } 00:08:01.691 } 00:08:01.691 }' 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:01.692 BaseBdev2' 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.692 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.952 [2024-11-17 01:28:10.232987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.952 [2024-11-17 01:28:10.233074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.952 [2024-11-17 01:28:10.233137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.952 01:28:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.952 "name": "Existed_Raid", 00:08:01.952 "uuid": "e5824ef9-b26f-4d92-abe2-3bd8b6eb7665", 00:08:01.952 "strip_size_kb": 64, 00:08:01.952 "state": "offline", 00:08:01.952 "raid_level": "raid0", 00:08:01.952 "superblock": true, 00:08:01.952 "num_base_bdevs": 2, 00:08:01.952 "num_base_bdevs_discovered": 1, 00:08:01.952 "num_base_bdevs_operational": 1, 00:08:01.952 "base_bdevs_list": [ 00:08:01.952 { 00:08:01.952 "name": null, 00:08:01.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.952 "is_configured": false, 00:08:01.952 "data_offset": 0, 00:08:01.952 "data_size": 63488 00:08:01.952 }, 00:08:01.952 { 00:08:01.952 "name": "BaseBdev2", 00:08:01.952 "uuid": "397a960b-3832-4a92-afc8-d19c6cecb542", 00:08:01.952 "is_configured": true, 00:08:01.952 "data_offset": 2048, 00:08:01.952 "data_size": 63488 00:08:01.952 } 00:08:01.952 ] 00:08:01.952 }' 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.952 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:02.522 01:28:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.522 [2024-11-17 01:28:10.789679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:02.522 [2024-11-17 01:28:10.789788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.522 01:28:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60820 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60820 ']' 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60820 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60820 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.522 killing process with pid 60820 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60820' 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60820 00:08:02.522 [2024-11-17 01:28:10.979904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.522 01:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60820 00:08:02.782 [2024-11-17 01:28:10.995846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.720 01:28:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:08:03.720 00:08:03.720 real 0m4.874s 00:08:03.720 user 0m6.991s 00:08:03.720 sys 0m0.804s 00:08:03.720 01:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.720 01:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.720 ************************************ 00:08:03.720 END TEST raid_state_function_test_sb 00:08:03.720 ************************************ 00:08:03.720 01:28:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:03.720 01:28:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:03.720 01:28:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.720 01:28:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.720 ************************************ 00:08:03.720 START TEST raid_superblock_test 00:08:03.720 ************************************ 00:08:03.720 01:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:03.721 01:28:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61072 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61072 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61072 ']' 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.721 01:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.980 [2024-11-17 01:28:12.248344] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:03.980 [2024-11-17 01:28:12.248605] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61072 ] 00:08:04.239 [2024-11-17 01:28:12.445755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.239 [2024-11-17 01:28:12.557844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.497 [2024-11-17 01:28:12.744531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.497 [2024-11-17 01:28:12.744588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:04.756 01:28:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.756 malloc1 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.756 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.757 [2024-11-17 01:28:13.110335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:04.757 [2024-11-17 01:28:13.110438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.757 [2024-11-17 01:28:13.110479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:04.757 [2024-11-17 01:28:13.110509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.757 [2024-11-17 01:28:13.112524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.757 [2024-11-17 01:28:13.112594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:04.757 pt1 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:04.757 01:28:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.757 malloc2 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.757 [2024-11-17 01:28:13.166513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.757 [2024-11-17 01:28:13.166609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.757 [2024-11-17 01:28:13.166646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:04.757 
[2024-11-17 01:28:13.166674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.757 [2024-11-17 01:28:13.168690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.757 [2024-11-17 01:28:13.168763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.757 pt2 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.757 [2024-11-17 01:28:13.178550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:04.757 [2024-11-17 01:28:13.180295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.757 [2024-11-17 01:28:13.180447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:04.757 [2024-11-17 01:28:13.180460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:04.757 [2024-11-17 01:28:13.180677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:04.757 [2024-11-17 01:28:13.180833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:04.757 [2024-11-17 01:28:13.180845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:04.757 [2024-11-17 01:28:13.180987] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.757 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.016 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.016 "name": "raid_bdev1", 00:08:05.016 "uuid": 
"da18ba98-934e-4b38-9b16-13e6078abf76", 00:08:05.016 "strip_size_kb": 64, 00:08:05.016 "state": "online", 00:08:05.016 "raid_level": "raid0", 00:08:05.016 "superblock": true, 00:08:05.016 "num_base_bdevs": 2, 00:08:05.016 "num_base_bdevs_discovered": 2, 00:08:05.016 "num_base_bdevs_operational": 2, 00:08:05.016 "base_bdevs_list": [ 00:08:05.016 { 00:08:05.016 "name": "pt1", 00:08:05.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.016 "is_configured": true, 00:08:05.016 "data_offset": 2048, 00:08:05.016 "data_size": 63488 00:08:05.016 }, 00:08:05.016 { 00:08:05.016 "name": "pt2", 00:08:05.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.016 "is_configured": true, 00:08:05.016 "data_offset": 2048, 00:08:05.016 "data_size": 63488 00:08:05.016 } 00:08:05.016 ] 00:08:05.016 }' 00:08:05.016 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.016 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.275 01:28:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.275 [2024-11-17 01:28:13.606077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.275 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.275 "name": "raid_bdev1", 00:08:05.275 "aliases": [ 00:08:05.275 "da18ba98-934e-4b38-9b16-13e6078abf76" 00:08:05.275 ], 00:08:05.275 "product_name": "Raid Volume", 00:08:05.275 "block_size": 512, 00:08:05.275 "num_blocks": 126976, 00:08:05.275 "uuid": "da18ba98-934e-4b38-9b16-13e6078abf76", 00:08:05.275 "assigned_rate_limits": { 00:08:05.275 "rw_ios_per_sec": 0, 00:08:05.275 "rw_mbytes_per_sec": 0, 00:08:05.275 "r_mbytes_per_sec": 0, 00:08:05.275 "w_mbytes_per_sec": 0 00:08:05.275 }, 00:08:05.275 "claimed": false, 00:08:05.275 "zoned": false, 00:08:05.275 "supported_io_types": { 00:08:05.275 "read": true, 00:08:05.275 "write": true, 00:08:05.275 "unmap": true, 00:08:05.275 "flush": true, 00:08:05.275 "reset": true, 00:08:05.275 "nvme_admin": false, 00:08:05.275 "nvme_io": false, 00:08:05.275 "nvme_io_md": false, 00:08:05.276 "write_zeroes": true, 00:08:05.276 "zcopy": false, 00:08:05.276 "get_zone_info": false, 00:08:05.276 "zone_management": false, 00:08:05.276 "zone_append": false, 00:08:05.276 "compare": false, 00:08:05.276 "compare_and_write": false, 00:08:05.276 "abort": false, 00:08:05.276 "seek_hole": false, 00:08:05.276 "seek_data": false, 00:08:05.276 "copy": false, 00:08:05.276 "nvme_iov_md": false 00:08:05.276 }, 00:08:05.276 "memory_domains": [ 00:08:05.276 { 00:08:05.276 "dma_device_id": "system", 00:08:05.276 "dma_device_type": 1 00:08:05.276 }, 00:08:05.276 { 00:08:05.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.276 "dma_device_type": 2 00:08:05.276 }, 00:08:05.276 { 00:08:05.276 "dma_device_id": "system", 00:08:05.276 "dma_device_type": 
1 00:08:05.276 }, 00:08:05.276 { 00:08:05.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.276 "dma_device_type": 2 00:08:05.276 } 00:08:05.276 ], 00:08:05.276 "driver_specific": { 00:08:05.276 "raid": { 00:08:05.276 "uuid": "da18ba98-934e-4b38-9b16-13e6078abf76", 00:08:05.276 "strip_size_kb": 64, 00:08:05.276 "state": "online", 00:08:05.276 "raid_level": "raid0", 00:08:05.276 "superblock": true, 00:08:05.276 "num_base_bdevs": 2, 00:08:05.276 "num_base_bdevs_discovered": 2, 00:08:05.276 "num_base_bdevs_operational": 2, 00:08:05.276 "base_bdevs_list": [ 00:08:05.276 { 00:08:05.276 "name": "pt1", 00:08:05.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.276 "is_configured": true, 00:08:05.276 "data_offset": 2048, 00:08:05.276 "data_size": 63488 00:08:05.276 }, 00:08:05.276 { 00:08:05.276 "name": "pt2", 00:08:05.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.276 "is_configured": true, 00:08:05.276 "data_offset": 2048, 00:08:05.276 "data_size": 63488 00:08:05.276 } 00:08:05.276 ] 00:08:05.276 } 00:08:05.276 } 00:08:05.276 }' 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:05.276 pt2' 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.276 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:05.535 [2024-11-17 01:28:13.825680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.535 01:28:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=da18ba98-934e-4b38-9b16-13e6078abf76 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z da18ba98-934e-4b38-9b16-13e6078abf76 ']' 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 [2024-11-17 01:28:13.853318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.535 [2024-11-17 01:28:13.853383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.535 [2024-11-17 01:28:13.853471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.535 [2024-11-17 01:28:13.853523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.535 [2024-11-17 01:28:13.853534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.535 01:28:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.535 [2024-11-17 01:28:13.985129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:05.535 [2024-11-17 01:28:13.986988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:05.535 [2024-11-17 01:28:13.987061] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:05.535 [2024-11-17 01:28:13.987108] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:05.535 [2024-11-17 01:28:13.987123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.535 [2024-11-17 01:28:13.987135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:05.535 request: 00:08:05.535 { 00:08:05.535 "name": "raid_bdev1", 00:08:05.535 "raid_level": "raid0", 00:08:05.535 "base_bdevs": [ 00:08:05.535 "malloc1", 00:08:05.535 "malloc2" 00:08:05.535 ], 00:08:05.535 "strip_size_kb": 64, 00:08:05.535 "superblock": false, 00:08:05.535 "method": "bdev_raid_create", 00:08:05.535 "req_id": 1 00:08:05.535 } 00:08:05.535 Got JSON-RPC error response 00:08:05.535 response: 00:08:05.535 { 00:08:05.535 "code": -17, 00:08:05.535 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:05.535 } 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:05.535 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:05.848 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:05.848 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:05.848 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:05.848 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.848 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.848 01:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.848 01:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:05.848 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.848 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:05.848 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:05.848 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:05.848 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.848 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.848 [2024-11-17 01:28:14.049011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:05.848 [2024-11-17 01:28:14.049127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.848 [2024-11-17 01:28:14.049166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:05.848 [2024-11-17 01:28:14.049198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.848 [2024-11-17 01:28:14.051466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.849 [2024-11-17 01:28:14.051552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:05.849 [2024-11-17 01:28:14.051673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:05.849 [2024-11-17 01:28:14.051800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:05.849 pt1 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.849 "name": "raid_bdev1", 00:08:05.849 "uuid": "da18ba98-934e-4b38-9b16-13e6078abf76", 00:08:05.849 "strip_size_kb": 64, 00:08:05.849 "state": "configuring", 00:08:05.849 "raid_level": "raid0", 00:08:05.849 "superblock": true, 00:08:05.849 "num_base_bdevs": 2, 00:08:05.849 "num_base_bdevs_discovered": 1, 00:08:05.849 "num_base_bdevs_operational": 2, 00:08:05.849 "base_bdevs_list": [ 00:08:05.849 { 00:08:05.849 "name": "pt1", 00:08:05.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.849 "is_configured": true, 00:08:05.849 "data_offset": 2048, 00:08:05.849 "data_size": 63488 00:08:05.849 }, 00:08:05.849 { 00:08:05.849 "name": null, 00:08:05.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.849 "is_configured": false, 00:08:05.849 "data_offset": 2048, 00:08:05.849 "data_size": 63488 00:08:05.849 } 00:08:05.849 ] 00:08:05.849 }' 00:08:05.849 01:28:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.849 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.135 [2024-11-17 01:28:14.464322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:06.135 [2024-11-17 01:28:14.464410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.135 [2024-11-17 01:28:14.464432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:06.135 [2024-11-17 01:28:14.464444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.135 [2024-11-17 01:28:14.464921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.135 [2024-11-17 01:28:14.464955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:06.135 [2024-11-17 01:28:14.465034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:06.135 [2024-11-17 01:28:14.465056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:06.135 [2024-11-17 01:28:14.465184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:06.135 [2024-11-17 01:28:14.465195] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:06.135 [2024-11-17 01:28:14.465412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:06.135 [2024-11-17 01:28:14.465569] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:06.135 [2024-11-17 01:28:14.465578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:06.135 [2024-11-17 01:28:14.465709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.135 pt2 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.135 "name": "raid_bdev1", 00:08:06.135 "uuid": "da18ba98-934e-4b38-9b16-13e6078abf76", 00:08:06.135 "strip_size_kb": 64, 00:08:06.135 "state": "online", 00:08:06.135 "raid_level": "raid0", 00:08:06.135 "superblock": true, 00:08:06.135 "num_base_bdevs": 2, 00:08:06.135 "num_base_bdevs_discovered": 2, 00:08:06.135 "num_base_bdevs_operational": 2, 00:08:06.135 "base_bdevs_list": [ 00:08:06.135 { 00:08:06.135 "name": "pt1", 00:08:06.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.135 "is_configured": true, 00:08:06.135 "data_offset": 2048, 00:08:06.135 "data_size": 63488 00:08:06.135 }, 00:08:06.135 { 00:08:06.135 "name": "pt2", 00:08:06.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.135 "is_configured": true, 00:08:06.135 "data_offset": 2048, 00:08:06.135 "data_size": 63488 00:08:06.135 } 00:08:06.135 ] 00:08:06.135 }' 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.135 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:06.705 
01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.705 [2024-11-17 01:28:14.919814] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.705 "name": "raid_bdev1", 00:08:06.705 "aliases": [ 00:08:06.705 "da18ba98-934e-4b38-9b16-13e6078abf76" 00:08:06.705 ], 00:08:06.705 "product_name": "Raid Volume", 00:08:06.705 "block_size": 512, 00:08:06.705 "num_blocks": 126976, 00:08:06.705 "uuid": "da18ba98-934e-4b38-9b16-13e6078abf76", 00:08:06.705 "assigned_rate_limits": { 00:08:06.705 "rw_ios_per_sec": 0, 00:08:06.705 "rw_mbytes_per_sec": 0, 00:08:06.705 "r_mbytes_per_sec": 0, 00:08:06.705 "w_mbytes_per_sec": 0 00:08:06.705 }, 00:08:06.705 "claimed": false, 00:08:06.705 "zoned": false, 00:08:06.705 "supported_io_types": { 00:08:06.705 "read": true, 00:08:06.705 "write": true, 00:08:06.705 "unmap": true, 00:08:06.705 "flush": true, 00:08:06.705 "reset": true, 00:08:06.705 "nvme_admin": false, 00:08:06.705 "nvme_io": false, 00:08:06.705 "nvme_io_md": false, 00:08:06.705 
"write_zeroes": true, 00:08:06.705 "zcopy": false, 00:08:06.705 "get_zone_info": false, 00:08:06.705 "zone_management": false, 00:08:06.705 "zone_append": false, 00:08:06.705 "compare": false, 00:08:06.705 "compare_and_write": false, 00:08:06.705 "abort": false, 00:08:06.705 "seek_hole": false, 00:08:06.705 "seek_data": false, 00:08:06.705 "copy": false, 00:08:06.705 "nvme_iov_md": false 00:08:06.705 }, 00:08:06.705 "memory_domains": [ 00:08:06.705 { 00:08:06.705 "dma_device_id": "system", 00:08:06.705 "dma_device_type": 1 00:08:06.705 }, 00:08:06.705 { 00:08:06.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.705 "dma_device_type": 2 00:08:06.705 }, 00:08:06.705 { 00:08:06.705 "dma_device_id": "system", 00:08:06.705 "dma_device_type": 1 00:08:06.705 }, 00:08:06.705 { 00:08:06.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.705 "dma_device_type": 2 00:08:06.705 } 00:08:06.705 ], 00:08:06.705 "driver_specific": { 00:08:06.705 "raid": { 00:08:06.705 "uuid": "da18ba98-934e-4b38-9b16-13e6078abf76", 00:08:06.705 "strip_size_kb": 64, 00:08:06.705 "state": "online", 00:08:06.705 "raid_level": "raid0", 00:08:06.705 "superblock": true, 00:08:06.705 "num_base_bdevs": 2, 00:08:06.705 "num_base_bdevs_discovered": 2, 00:08:06.705 "num_base_bdevs_operational": 2, 00:08:06.705 "base_bdevs_list": [ 00:08:06.705 { 00:08:06.705 "name": "pt1", 00:08:06.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.705 "is_configured": true, 00:08:06.705 "data_offset": 2048, 00:08:06.705 "data_size": 63488 00:08:06.705 }, 00:08:06.705 { 00:08:06.705 "name": "pt2", 00:08:06.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.705 "is_configured": true, 00:08:06.705 "data_offset": 2048, 00:08:06.705 "data_size": 63488 00:08:06.705 } 00:08:06.705 ] 00:08:06.705 } 00:08:06.705 } 00:08:06.705 }' 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:06.705 pt2' 00:08:06.705 01:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.705 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.705 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.705 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.706 01:28:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.706 [2024-11-17 01:28:15.139398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.706 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' da18ba98-934e-4b38-9b16-13e6078abf76 '!=' da18ba98-934e-4b38-9b16-13e6078abf76 ']' 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61072 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61072 ']' 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61072 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61072 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61072' 00:08:06.966 killing process with pid 61072 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61072 00:08:06.966 [2024-11-17 01:28:15.206817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.966 [2024-11-17 01:28:15.206964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.966 [2024-11-17 01:28:15.207063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.966 01:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61072 00:08:06.966 [2024-11-17 01:28:15.207128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:06.966 [2024-11-17 01:28:15.405328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.347 01:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:08.347 00:08:08.347 real 0m4.329s 00:08:08.347 user 0m6.045s 00:08:08.347 sys 0m0.740s 00:08:08.347 01:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.347 01:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.347 ************************************ 00:08:08.347 END TEST raid_superblock_test 00:08:08.347 ************************************ 00:08:08.347 01:28:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:08.347 01:28:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.347 01:28:16 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:08.347 01:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.347 ************************************ 00:08:08.347 START TEST raid_read_error_test 00:08:08.347 ************************************ 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4m8u0Jnihq 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61278 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61278 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61278 ']' 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.347 01:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.347 [2024-11-17 01:28:16.644443] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:08.347 [2024-11-17 01:28:16.644631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61278 ] 00:08:08.607 [2024-11-17 01:28:16.817041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.607 [2024-11-17 01:28:16.922415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.867 [2024-11-17 01:28:17.106853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.867 [2024-11-17 01:28:17.106984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.127 BaseBdev1_malloc 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.127 true 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.127 [2024-11-17 01:28:17.530340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:09.127 [2024-11-17 01:28:17.530394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.127 [2024-11-17 01:28:17.530429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:09.127 [2024-11-17 01:28:17.530440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.127 [2024-11-17 01:28:17.532527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.127 [2024-11-17 01:28:17.532600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:09.127 BaseBdev1 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:09.127 BaseBdev2_malloc 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.127 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 true 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 [2024-11-17 01:28:17.593670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:09.388 [2024-11-17 01:28:17.593728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.388 [2024-11-17 01:28:17.593745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:09.388 [2024-11-17 01:28:17.593770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.388 [2024-11-17 01:28:17.595848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.388 [2024-11-17 01:28:17.595886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:09.388 BaseBdev2 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:09.388 01:28:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 [2024-11-17 01:28:17.605704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.388 [2024-11-17 01:28:17.607488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.388 [2024-11-17 01:28:17.607664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:09.388 [2024-11-17 01:28:17.607680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:09.388 [2024-11-17 01:28:17.607901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:09.388 [2024-11-17 01:28:17.608063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:09.388 [2024-11-17 01:28:17.608075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:09.388 [2024-11-17 01:28:17.608216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.388 "name": "raid_bdev1", 00:08:09.388 "uuid": "a8e2f550-f570-4d8a-bbf0-30c2a92f15fd", 00:08:09.388 "strip_size_kb": 64, 00:08:09.388 "state": "online", 00:08:09.388 "raid_level": "raid0", 00:08:09.388 "superblock": true, 00:08:09.388 "num_base_bdevs": 2, 00:08:09.388 "num_base_bdevs_discovered": 2, 00:08:09.388 "num_base_bdevs_operational": 2, 00:08:09.388 "base_bdevs_list": [ 00:08:09.388 { 00:08:09.388 "name": "BaseBdev1", 00:08:09.388 "uuid": "f6e6d174-cb25-596d-9537-b55417086f57", 00:08:09.388 "is_configured": true, 00:08:09.388 "data_offset": 2048, 00:08:09.388 "data_size": 63488 00:08:09.388 }, 00:08:09.388 { 00:08:09.388 "name": "BaseBdev2", 00:08:09.388 "uuid": "391dfe1f-b1ee-521d-b469-0d89e8af2712", 00:08:09.388 "is_configured": true, 00:08:09.388 "data_offset": 2048, 00:08:09.388 "data_size": 63488 00:08:09.388 } 00:08:09.388 ] 00:08:09.388 }' 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.388 01:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 01:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:09.648 01:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:09.648 [2024-11-17 01:28:18.086071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.588 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.847 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.847 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.847 "name": "raid_bdev1", 00:08:10.847 "uuid": "a8e2f550-f570-4d8a-bbf0-30c2a92f15fd", 00:08:10.847 "strip_size_kb": 64, 00:08:10.847 "state": "online", 00:08:10.847 "raid_level": "raid0", 00:08:10.847 "superblock": true, 00:08:10.847 "num_base_bdevs": 2, 00:08:10.847 "num_base_bdevs_discovered": 2, 00:08:10.847 "num_base_bdevs_operational": 2, 00:08:10.847 "base_bdevs_list": [ 00:08:10.847 { 00:08:10.847 "name": "BaseBdev1", 00:08:10.847 "uuid": "f6e6d174-cb25-596d-9537-b55417086f57", 00:08:10.847 "is_configured": true, 00:08:10.847 "data_offset": 2048, 00:08:10.847 "data_size": 63488 00:08:10.847 }, 00:08:10.847 { 00:08:10.847 "name": "BaseBdev2", 00:08:10.847 "uuid": "391dfe1f-b1ee-521d-b469-0d89e8af2712", 00:08:10.847 "is_configured": true, 00:08:10.847 "data_offset": 2048, 00:08:10.847 "data_size": 63488 00:08:10.847 } 00:08:10.847 ] 00:08:10.847 }' 00:08:10.847 01:28:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.847 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.107 [2024-11-17 01:28:19.497947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:11.107 [2024-11-17 01:28:19.498045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.107 [2024-11-17 01:28:19.500815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.107 [2024-11-17 01:28:19.500897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.107 [2024-11-17 01:28:19.500958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.107 [2024-11-17 01:28:19.501005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:11.107 { 00:08:11.107 "results": [ 00:08:11.107 { 00:08:11.107 "job": "raid_bdev1", 00:08:11.107 "core_mask": "0x1", 00:08:11.107 "workload": "randrw", 00:08:11.107 "percentage": 50, 00:08:11.107 "status": "finished", 00:08:11.107 "queue_depth": 1, 00:08:11.107 "io_size": 131072, 00:08:11.107 "runtime": 1.412896, 00:08:11.107 "iops": 16630.381854007654, 00:08:11.107 "mibps": 2078.7977317509567, 00:08:11.107 "io_failed": 1, 00:08:11.107 "io_timeout": 0, 00:08:11.107 "avg_latency_us": 83.44695172421994, 00:08:11.107 "min_latency_us": 24.482096069868994, 00:08:11.107 "max_latency_us": 1416.6078602620087 00:08:11.107 } 00:08:11.107 ], 00:08:11.107 "core_count": 1 00:08:11.107 } 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61278 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61278 ']' 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61278 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61278 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61278' 00:08:11.107 killing process with pid 61278 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61278 00:08:11.107 [2024-11-17 01:28:19.527889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.107 01:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61278 00:08:11.367 [2024-11-17 01:28:19.661657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4m8u0Jnihq 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:12.359 ************************************ 00:08:12.359 END TEST raid_read_error_test 00:08:12.359 ************************************ 00:08:12.359 00:08:12.359 real 0m4.270s 00:08:12.359 user 0m5.086s 00:08:12.359 sys 0m0.532s 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.359 01:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.619 01:28:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:12.619 01:28:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.619 01:28:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.619 01:28:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.619 ************************************ 00:08:12.619 START TEST raid_write_error_test 00:08:12.619 ************************************ 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.619 01:28:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6l3NfCCyRI 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61424 00:08:12.619 01:28:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61424 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61424 ']' 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.619 01:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.619 [2024-11-17 01:28:20.983703] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:12.619 [2024-11-17 01:28:20.983917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61424 ] 00:08:12.878 [2024-11-17 01:28:21.150174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.878 [2024-11-17 01:28:21.257116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.138 [2024-11-17 01:28:21.441370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.138 [2024-11-17 01:28:21.441505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.398 BaseBdev1_malloc 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.398 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.658 true 00:08:13.658 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:13.658 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:13.658 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.658 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.658 [2024-11-17 01:28:21.868814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:13.658 [2024-11-17 01:28:21.868872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.658 [2024-11-17 01:28:21.868892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:13.659 [2024-11-17 01:28:21.868903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.659 [2024-11-17 01:28:21.870985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.659 [2024-11-17 01:28:21.871071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:13.659 BaseBdev1 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.659 BaseBdev2_malloc 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:13.659 01:28:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.659 true 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.659 [2024-11-17 01:28:21.935362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:13.659 [2024-11-17 01:28:21.935419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.659 [2024-11-17 01:28:21.935435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:13.659 [2024-11-17 01:28:21.935446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.659 [2024-11-17 01:28:21.937512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.659 [2024-11-17 01:28:21.937601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:13.659 BaseBdev2 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.659 [2024-11-17 01:28:21.947402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:13.659 [2024-11-17 01:28:21.949170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.659 [2024-11-17 01:28:21.949356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.659 [2024-11-17 01:28:21.949374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:13.659 [2024-11-17 01:28:21.949594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:13.659 [2024-11-17 01:28:21.949777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.659 [2024-11-17 01:28:21.949791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:13.659 [2024-11-17 01:28:21.949947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.659 01:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.659 01:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.659 "name": "raid_bdev1", 00:08:13.659 "uuid": "51e78d89-2205-4747-a18d-55a7b3f598e3", 00:08:13.659 "strip_size_kb": 64, 00:08:13.659 "state": "online", 00:08:13.659 "raid_level": "raid0", 00:08:13.659 "superblock": true, 00:08:13.659 "num_base_bdevs": 2, 00:08:13.659 "num_base_bdevs_discovered": 2, 00:08:13.659 "num_base_bdevs_operational": 2, 00:08:13.659 "base_bdevs_list": [ 00:08:13.659 { 00:08:13.659 "name": "BaseBdev1", 00:08:13.659 "uuid": "783758af-ef2a-55ec-bfa4-2c318a3ae0a6", 00:08:13.659 "is_configured": true, 00:08:13.659 "data_offset": 2048, 00:08:13.659 "data_size": 63488 00:08:13.659 }, 00:08:13.659 { 00:08:13.659 "name": "BaseBdev2", 00:08:13.659 "uuid": "ce82feb9-68d9-5076-b1b2-f743f6ec5cd7", 00:08:13.659 "is_configured": true, 00:08:13.659 "data_offset": 2048, 00:08:13.659 "data_size": 63488 00:08:13.659 } 00:08:13.659 ] 00:08:13.659 }' 00:08:13.659 01:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.659 01:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.229 01:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:14.229 01:28:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:14.229 [2024-11-17 01:28:22.487658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.169 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.170 01:28:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.170 "name": "raid_bdev1", 00:08:15.170 "uuid": "51e78d89-2205-4747-a18d-55a7b3f598e3", 00:08:15.170 "strip_size_kb": 64, 00:08:15.170 "state": "online", 00:08:15.170 "raid_level": "raid0", 00:08:15.170 "superblock": true, 00:08:15.170 "num_base_bdevs": 2, 00:08:15.170 "num_base_bdevs_discovered": 2, 00:08:15.170 "num_base_bdevs_operational": 2, 00:08:15.170 "base_bdevs_list": [ 00:08:15.170 { 00:08:15.170 "name": "BaseBdev1", 00:08:15.170 "uuid": "783758af-ef2a-55ec-bfa4-2c318a3ae0a6", 00:08:15.170 "is_configured": true, 00:08:15.170 "data_offset": 2048, 00:08:15.170 "data_size": 63488 00:08:15.170 }, 00:08:15.170 { 00:08:15.170 "name": "BaseBdev2", 00:08:15.170 "uuid": "ce82feb9-68d9-5076-b1b2-f743f6ec5cd7", 00:08:15.170 "is_configured": true, 00:08:15.170 "data_offset": 2048, 00:08:15.170 "data_size": 63488 00:08:15.170 } 00:08:15.170 ] 00:08:15.170 }' 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.170 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.430 [2024-11-17 01:28:23.845688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.430 [2024-11-17 01:28:23.845723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.430 [2024-11-17 01:28:23.848253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.430 [2024-11-17 01:28:23.848356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.430 [2024-11-17 01:28:23.848395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.430 [2024-11-17 01:28:23.848407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:15.430 { 00:08:15.430 "results": [ 00:08:15.430 { 00:08:15.430 "job": "raid_bdev1", 00:08:15.430 "core_mask": "0x1", 00:08:15.430 "workload": "randrw", 00:08:15.430 "percentage": 50, 00:08:15.430 "status": "finished", 00:08:15.430 "queue_depth": 1, 00:08:15.430 "io_size": 131072, 00:08:15.430 "runtime": 1.358871, 00:08:15.430 "iops": 16777.16280647685, 00:08:15.430 "mibps": 2097.145350809606, 00:08:15.430 "io_failed": 1, 00:08:15.430 "io_timeout": 0, 00:08:15.430 "avg_latency_us": 82.73301069858461, 00:08:15.430 "min_latency_us": 24.705676855895195, 00:08:15.430 "max_latency_us": 1395.1441048034935 00:08:15.430 } 00:08:15.430 ], 00:08:15.430 "core_count": 1 00:08:15.430 } 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61424 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61424 ']' 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61424 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.430 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61424 00:08:15.689 killing process with pid 61424 00:08:15.689 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.689 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.689 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61424' 00:08:15.689 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61424 00:08:15.689 [2024-11-17 01:28:23.897449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.689 01:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61424 00:08:15.689 [2024-11-17 01:28:24.024142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6l3NfCCyRI 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:17.070 00:08:17.070 real 0m4.257s 00:08:17.070 user 0m5.101s 00:08:17.070 sys 0m0.547s 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.070 ************************************ 00:08:17.070 END TEST raid_write_error_test 00:08:17.070 ************************************ 00:08:17.070 01:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.070 01:28:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:17.070 01:28:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:17.070 01:28:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.070 01:28:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.070 01:28:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.070 ************************************ 00:08:17.070 START TEST raid_state_function_test 00:08:17.070 ************************************ 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61562 00:08:17.070 01:28:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61562' 00:08:17.070 Process raid pid: 61562 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61562 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61562 ']' 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.070 01:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.070 [2024-11-17 01:28:25.301992] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:17.070 [2024-11-17 01:28:25.302131] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.070 [2024-11-17 01:28:25.473749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.330 [2024-11-17 01:28:25.578603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.330 [2024-11-17 01:28:25.767228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.330 [2024-11-17 01:28:25.767264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.900 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.900 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:17.900 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.900 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.900 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.900 [2024-11-17 01:28:26.120955] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.900 [2024-11-17 01:28:26.121074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.900 [2024-11-17 01:28:26.121089] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.900 [2024-11-17 01:28:26.121099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.900 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.901 01:28:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.901 "name": "Existed_Raid", 00:08:17.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.901 "strip_size_kb": 64, 00:08:17.901 "state": "configuring", 00:08:17.901 
"raid_level": "concat", 00:08:17.901 "superblock": false, 00:08:17.901 "num_base_bdevs": 2, 00:08:17.901 "num_base_bdevs_discovered": 0, 00:08:17.901 "num_base_bdevs_operational": 2, 00:08:17.901 "base_bdevs_list": [ 00:08:17.901 { 00:08:17.901 "name": "BaseBdev1", 00:08:17.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.901 "is_configured": false, 00:08:17.901 "data_offset": 0, 00:08:17.901 "data_size": 0 00:08:17.901 }, 00:08:17.901 { 00:08:17.901 "name": "BaseBdev2", 00:08:17.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.901 "is_configured": false, 00:08:17.901 "data_offset": 0, 00:08:17.901 "data_size": 0 00:08:17.901 } 00:08:17.901 ] 00:08:17.901 }' 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.901 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.161 [2024-11-17 01:28:26.556244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.161 [2024-11-17 01:28:26.556344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:18.161 [2024-11-17 01:28:26.568215] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.161 [2024-11-17 01:28:26.568322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.161 [2024-11-17 01:28:26.568358] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.161 [2024-11-17 01:28:26.568387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.161 [2024-11-17 01:28:26.616819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.161 BaseBdev1 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.161 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.421 [ 00:08:18.421 { 00:08:18.421 "name": "BaseBdev1", 00:08:18.421 "aliases": [ 00:08:18.421 "4a62ee61-1825-4477-9c80-95bc3a3db3d0" 00:08:18.421 ], 00:08:18.421 "product_name": "Malloc disk", 00:08:18.421 "block_size": 512, 00:08:18.421 "num_blocks": 65536, 00:08:18.421 "uuid": "4a62ee61-1825-4477-9c80-95bc3a3db3d0", 00:08:18.421 "assigned_rate_limits": { 00:08:18.421 "rw_ios_per_sec": 0, 00:08:18.421 "rw_mbytes_per_sec": 0, 00:08:18.421 "r_mbytes_per_sec": 0, 00:08:18.421 "w_mbytes_per_sec": 0 00:08:18.421 }, 00:08:18.421 "claimed": true, 00:08:18.421 "claim_type": "exclusive_write", 00:08:18.421 "zoned": false, 00:08:18.421 "supported_io_types": { 00:08:18.421 "read": true, 00:08:18.421 "write": true, 00:08:18.421 "unmap": true, 00:08:18.421 "flush": true, 00:08:18.421 "reset": true, 00:08:18.421 "nvme_admin": false, 00:08:18.421 "nvme_io": false, 00:08:18.421 "nvme_io_md": false, 00:08:18.421 "write_zeroes": true, 00:08:18.421 "zcopy": true, 00:08:18.421 "get_zone_info": false, 00:08:18.421 "zone_management": false, 00:08:18.421 "zone_append": false, 00:08:18.421 "compare": false, 00:08:18.421 "compare_and_write": false, 00:08:18.421 "abort": true, 00:08:18.421 "seek_hole": false, 00:08:18.421 "seek_data": false, 00:08:18.421 "copy": true, 00:08:18.421 "nvme_iov_md": 
false 00:08:18.421 }, 00:08:18.421 "memory_domains": [ 00:08:18.421 { 00:08:18.421 "dma_device_id": "system", 00:08:18.421 "dma_device_type": 1 00:08:18.421 }, 00:08:18.421 { 00:08:18.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.421 "dma_device_type": 2 00:08:18.421 } 00:08:18.421 ], 00:08:18.421 "driver_specific": {} 00:08:18.421 } 00:08:18.421 ] 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.421 01:28:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.421 "name": "Existed_Raid", 00:08:18.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.421 "strip_size_kb": 64, 00:08:18.421 "state": "configuring", 00:08:18.421 "raid_level": "concat", 00:08:18.421 "superblock": false, 00:08:18.421 "num_base_bdevs": 2, 00:08:18.421 "num_base_bdevs_discovered": 1, 00:08:18.421 "num_base_bdevs_operational": 2, 00:08:18.421 "base_bdevs_list": [ 00:08:18.421 { 00:08:18.421 "name": "BaseBdev1", 00:08:18.421 "uuid": "4a62ee61-1825-4477-9c80-95bc3a3db3d0", 00:08:18.421 "is_configured": true, 00:08:18.421 "data_offset": 0, 00:08:18.421 "data_size": 65536 00:08:18.421 }, 00:08:18.421 { 00:08:18.421 "name": "BaseBdev2", 00:08:18.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.421 "is_configured": false, 00:08:18.421 "data_offset": 0, 00:08:18.421 "data_size": 0 00:08:18.421 } 00:08:18.421 ] 00:08:18.421 }' 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.421 01:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.681 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.681 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.681 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.681 [2024-11-17 01:28:27.119970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.681 [2024-11-17 01:28:27.120073] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:18.681 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.681 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:18.681 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.681 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.681 [2024-11-17 01:28:27.131988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.681 [2024-11-17 01:28:27.133761] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.681 [2024-11-17 01:28:27.133816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.940 "name": "Existed_Raid", 00:08:18.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.940 "strip_size_kb": 64, 00:08:18.940 "state": "configuring", 00:08:18.940 "raid_level": "concat", 00:08:18.940 "superblock": false, 00:08:18.940 "num_base_bdevs": 2, 00:08:18.940 "num_base_bdevs_discovered": 1, 00:08:18.940 "num_base_bdevs_operational": 2, 00:08:18.940 "base_bdevs_list": [ 00:08:18.940 { 00:08:18.940 "name": "BaseBdev1", 00:08:18.940 "uuid": "4a62ee61-1825-4477-9c80-95bc3a3db3d0", 00:08:18.940 "is_configured": true, 00:08:18.940 "data_offset": 0, 00:08:18.940 "data_size": 65536 00:08:18.940 }, 00:08:18.940 { 00:08:18.940 "name": "BaseBdev2", 00:08:18.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.940 "is_configured": false, 00:08:18.940 "data_offset": 0, 00:08:18.940 "data_size": 0 
00:08:18.940 } 00:08:18.940 ] 00:08:18.940 }' 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.940 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.199 [2024-11-17 01:28:27.578781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.199 [2024-11-17 01:28:27.578902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.199 [2024-11-17 01:28:27.578927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:19.199 [2024-11-17 01:28:27.579240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:19.199 [2024-11-17 01:28:27.579443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.199 [2024-11-17 01:28:27.579489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:19.199 [2024-11-17 01:28:27.579803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.199 BaseBdev2 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.199 01:28:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.199 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.200 [ 00:08:19.200 { 00:08:19.200 "name": "BaseBdev2", 00:08:19.200 "aliases": [ 00:08:19.200 "35f18dc9-f092-42d2-98c8-0c7b28962be4" 00:08:19.200 ], 00:08:19.200 "product_name": "Malloc disk", 00:08:19.200 "block_size": 512, 00:08:19.200 "num_blocks": 65536, 00:08:19.200 "uuid": "35f18dc9-f092-42d2-98c8-0c7b28962be4", 00:08:19.200 "assigned_rate_limits": { 00:08:19.200 "rw_ios_per_sec": 0, 00:08:19.200 "rw_mbytes_per_sec": 0, 00:08:19.200 "r_mbytes_per_sec": 0, 00:08:19.200 "w_mbytes_per_sec": 0 00:08:19.200 }, 00:08:19.200 "claimed": true, 00:08:19.200 "claim_type": "exclusive_write", 00:08:19.200 "zoned": false, 00:08:19.200 "supported_io_types": { 00:08:19.200 "read": true, 00:08:19.200 "write": true, 00:08:19.200 "unmap": true, 00:08:19.200 "flush": true, 00:08:19.200 "reset": true, 00:08:19.200 "nvme_admin": false, 00:08:19.200 "nvme_io": false, 00:08:19.200 "nvme_io_md": 
false, 00:08:19.200 "write_zeroes": true, 00:08:19.200 "zcopy": true, 00:08:19.200 "get_zone_info": false, 00:08:19.200 "zone_management": false, 00:08:19.200 "zone_append": false, 00:08:19.200 "compare": false, 00:08:19.200 "compare_and_write": false, 00:08:19.200 "abort": true, 00:08:19.200 "seek_hole": false, 00:08:19.200 "seek_data": false, 00:08:19.200 "copy": true, 00:08:19.200 "nvme_iov_md": false 00:08:19.200 }, 00:08:19.200 "memory_domains": [ 00:08:19.200 { 00:08:19.200 "dma_device_id": "system", 00:08:19.200 "dma_device_type": 1 00:08:19.200 }, 00:08:19.200 { 00:08:19.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.200 "dma_device_type": 2 00:08:19.200 } 00:08:19.200 ], 00:08:19.200 "driver_specific": {} 00:08:19.200 } 00:08:19.200 ] 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.200 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.459 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.459 "name": "Existed_Raid", 00:08:19.459 "uuid": "5136a571-fe3e-42c0-9fa8-4df094f14d53", 00:08:19.459 "strip_size_kb": 64, 00:08:19.459 "state": "online", 00:08:19.459 "raid_level": "concat", 00:08:19.459 "superblock": false, 00:08:19.459 "num_base_bdevs": 2, 00:08:19.459 "num_base_bdevs_discovered": 2, 00:08:19.459 "num_base_bdevs_operational": 2, 00:08:19.459 "base_bdevs_list": [ 00:08:19.459 { 00:08:19.459 "name": "BaseBdev1", 00:08:19.459 "uuid": "4a62ee61-1825-4477-9c80-95bc3a3db3d0", 00:08:19.459 "is_configured": true, 00:08:19.459 "data_offset": 0, 00:08:19.459 "data_size": 65536 00:08:19.459 }, 00:08:19.459 { 00:08:19.459 "name": "BaseBdev2", 00:08:19.459 "uuid": "35f18dc9-f092-42d2-98c8-0c7b28962be4", 00:08:19.460 "is_configured": true, 00:08:19.460 "data_offset": 0, 00:08:19.460 "data_size": 65536 00:08:19.460 } 00:08:19.460 ] 00:08:19.460 }' 00:08:19.460 01:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:19.460 01:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.719 [2024-11-17 01:28:28.066249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.719 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.719 "name": "Existed_Raid", 00:08:19.719 "aliases": [ 00:08:19.719 "5136a571-fe3e-42c0-9fa8-4df094f14d53" 00:08:19.719 ], 00:08:19.719 "product_name": "Raid Volume", 00:08:19.719 "block_size": 512, 00:08:19.719 "num_blocks": 131072, 00:08:19.719 "uuid": "5136a571-fe3e-42c0-9fa8-4df094f14d53", 00:08:19.719 "assigned_rate_limits": { 00:08:19.719 "rw_ios_per_sec": 0, 00:08:19.719 "rw_mbytes_per_sec": 0, 00:08:19.719 "r_mbytes_per_sec": 
0, 00:08:19.719 "w_mbytes_per_sec": 0 00:08:19.719 }, 00:08:19.719 "claimed": false, 00:08:19.719 "zoned": false, 00:08:19.719 "supported_io_types": { 00:08:19.719 "read": true, 00:08:19.719 "write": true, 00:08:19.719 "unmap": true, 00:08:19.719 "flush": true, 00:08:19.719 "reset": true, 00:08:19.719 "nvme_admin": false, 00:08:19.719 "nvme_io": false, 00:08:19.719 "nvme_io_md": false, 00:08:19.719 "write_zeroes": true, 00:08:19.720 "zcopy": false, 00:08:19.720 "get_zone_info": false, 00:08:19.720 "zone_management": false, 00:08:19.720 "zone_append": false, 00:08:19.720 "compare": false, 00:08:19.720 "compare_and_write": false, 00:08:19.720 "abort": false, 00:08:19.720 "seek_hole": false, 00:08:19.720 "seek_data": false, 00:08:19.720 "copy": false, 00:08:19.720 "nvme_iov_md": false 00:08:19.720 }, 00:08:19.720 "memory_domains": [ 00:08:19.720 { 00:08:19.720 "dma_device_id": "system", 00:08:19.720 "dma_device_type": 1 00:08:19.720 }, 00:08:19.720 { 00:08:19.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.720 "dma_device_type": 2 00:08:19.720 }, 00:08:19.720 { 00:08:19.720 "dma_device_id": "system", 00:08:19.720 "dma_device_type": 1 00:08:19.720 }, 00:08:19.720 { 00:08:19.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.720 "dma_device_type": 2 00:08:19.720 } 00:08:19.720 ], 00:08:19.720 "driver_specific": { 00:08:19.720 "raid": { 00:08:19.720 "uuid": "5136a571-fe3e-42c0-9fa8-4df094f14d53", 00:08:19.720 "strip_size_kb": 64, 00:08:19.720 "state": "online", 00:08:19.720 "raid_level": "concat", 00:08:19.720 "superblock": false, 00:08:19.720 "num_base_bdevs": 2, 00:08:19.720 "num_base_bdevs_discovered": 2, 00:08:19.720 "num_base_bdevs_operational": 2, 00:08:19.720 "base_bdevs_list": [ 00:08:19.720 { 00:08:19.720 "name": "BaseBdev1", 00:08:19.720 "uuid": "4a62ee61-1825-4477-9c80-95bc3a3db3d0", 00:08:19.720 "is_configured": true, 00:08:19.720 "data_offset": 0, 00:08:19.720 "data_size": 65536 00:08:19.720 }, 00:08:19.720 { 00:08:19.720 "name": "BaseBdev2", 
00:08:19.720 "uuid": "35f18dc9-f092-42d2-98c8-0c7b28962be4", 00:08:19.720 "is_configured": true, 00:08:19.720 "data_offset": 0, 00:08:19.720 "data_size": 65536 00:08:19.720 } 00:08:19.720 ] 00:08:19.720 } 00:08:19.720 } 00:08:19.720 }' 00:08:19.720 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.720 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:19.720 BaseBdev2' 00:08:19.720 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.979 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.979 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.980 [2024-11-17 01:28:28.313557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:19.980 [2024-11-17 01:28:28.313590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.980 [2024-11-17 01:28:28.313638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.980 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.239 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.239 "name": "Existed_Raid", 00:08:20.239 "uuid": "5136a571-fe3e-42c0-9fa8-4df094f14d53", 00:08:20.239 "strip_size_kb": 64, 00:08:20.239 
"state": "offline", 00:08:20.239 "raid_level": "concat", 00:08:20.239 "superblock": false, 00:08:20.239 "num_base_bdevs": 2, 00:08:20.239 "num_base_bdevs_discovered": 1, 00:08:20.239 "num_base_bdevs_operational": 1, 00:08:20.239 "base_bdevs_list": [ 00:08:20.239 { 00:08:20.239 "name": null, 00:08:20.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.239 "is_configured": false, 00:08:20.239 "data_offset": 0, 00:08:20.239 "data_size": 65536 00:08:20.239 }, 00:08:20.239 { 00:08:20.239 "name": "BaseBdev2", 00:08:20.239 "uuid": "35f18dc9-f092-42d2-98c8-0c7b28962be4", 00:08:20.239 "is_configured": true, 00:08:20.239 "data_offset": 0, 00:08:20.239 "data_size": 65536 00:08:20.239 } 00:08:20.239 ] 00:08:20.239 }' 00:08:20.239 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.239 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.499 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.499 [2024-11-17 01:28:28.887700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:20.499 [2024-11-17 01:28:28.887814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:20.759 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.759 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:20.759 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.759 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.759 01:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:20.759 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.759 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.759 01:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61562 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61562 ']' 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61562 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61562 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.759 killing process with pid 61562 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61562' 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61562 00:08:20.759 [2024-11-17 01:28:29.073916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.759 01:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61562 00:08:20.759 [2024-11-17 01:28:29.091178] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.696 01:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:21.696 ************************************ 00:08:21.696 END TEST raid_state_function_test 00:08:21.696 ************************************ 00:08:21.696 00:08:21.696 real 0m4.933s 00:08:21.696 user 0m7.152s 00:08:21.696 sys 0m0.808s 00:08:21.696 01:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.696 01:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.956 01:28:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:21.956 01:28:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:21.956 01:28:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.956 01:28:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.956 ************************************ 00:08:21.956 START TEST raid_state_function_test_sb 00:08:21.956 ************************************ 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:21.956 Process raid pid: 61814 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61814 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61814' 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61814 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61814 ']' 00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:21.956 01:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:21.956 [2024-11-17 01:28:30.308965] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:08:21.956 [2024-11-17 01:28:30.309109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:22.216 [2024-11-17 01:28:30.481287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:22.216 [2024-11-17 01:28:30.596274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:22.476 [2024-11-17 01:28:30.781807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:22.476 [2024-11-17 01:28:30.781935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.736 [2024-11-17 01:28:31.134722] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:22.736 [2024-11-17 01:28:31.134837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:22.736 [2024-11-17 01:28:31.134853] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:22.736 [2024-11-17 01:28:31.134863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:22.736 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:22.737 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:22.737 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.737 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:22.737 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.737 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:22.737 "name": "Existed_Raid",
00:08:22.737 "uuid": "340a03c0-f398-429f-b9f4-7814a73ffea2",
00:08:22.737 "strip_size_kb": 64,
00:08:22.737 "state": "configuring",
00:08:22.737 "raid_level": "concat",
00:08:22.737 "superblock": true,
00:08:22.737 "num_base_bdevs": 2,
00:08:22.737 "num_base_bdevs_discovered": 0,
00:08:22.737 "num_base_bdevs_operational": 2,
00:08:22.737 "base_bdevs_list": [
00:08:22.737 {
00:08:22.737 "name": "BaseBdev1",
00:08:22.737 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:22.737 "is_configured": false,
00:08:22.737 "data_offset": 0,
00:08:22.737 "data_size": 0
00:08:22.737 },
00:08:22.737 {
00:08:22.737 "name": "BaseBdev2",
00:08:22.737 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:22.737 "is_configured": false,
00:08:22.737 "data_offset": 0,
00:08:22.737 "data_size": 0
00:08:22.737 }
00:08:22.737 ]
00:08:22.737 }'
00:08:22.737 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:22.737 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.306 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:23.306 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.306 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.306 [2024-11-17 01:28:31.573915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:23.306 [2024-11-17 01:28:31.574022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:23.306 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.306 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:23.306 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.306 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.306 [2024-11-17 01:28:31.585927] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:23.307 [2024-11-17 01:28:31.585977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:23.307 [2024-11-17 01:28:31.585986] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:23.307 [2024-11-17 01:28:31.586013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.307 [2024-11-17 01:28:31.633250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:23.307 BaseBdev1
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.307 [
00:08:23.307 {
00:08:23.307 "name": "BaseBdev1",
00:08:23.307 "aliases": [
00:08:23.307 "dd3900eb-650e-448e-be20-40decb379e54"
00:08:23.307 ],
00:08:23.307 "product_name": "Malloc disk",
00:08:23.307 "block_size": 512,
00:08:23.307 "num_blocks": 65536,
00:08:23.307 "uuid": "dd3900eb-650e-448e-be20-40decb379e54",
00:08:23.307 "assigned_rate_limits": {
00:08:23.307 "rw_ios_per_sec": 0,
00:08:23.307 "rw_mbytes_per_sec": 0,
00:08:23.307 "r_mbytes_per_sec": 0,
00:08:23.307 "w_mbytes_per_sec": 0
00:08:23.307 },
00:08:23.307 "claimed": true,
00:08:23.307 "claim_type": "exclusive_write",
00:08:23.307 "zoned": false,
00:08:23.307 "supported_io_types": {
00:08:23.307 "read": true,
00:08:23.307 "write": true,
00:08:23.307 "unmap": true,
00:08:23.307 "flush": true,
00:08:23.307 "reset": true,
00:08:23.307 "nvme_admin": false,
00:08:23.307 "nvme_io": false,
00:08:23.307 "nvme_io_md": false,
00:08:23.307 "write_zeroes": true,
00:08:23.307 "zcopy": true,
00:08:23.307 "get_zone_info": false,
00:08:23.307 "zone_management": false,
00:08:23.307 "zone_append": false,
00:08:23.307 "compare": false,
00:08:23.307 "compare_and_write": false,
00:08:23.307 "abort": true,
00:08:23.307 "seek_hole": false,
00:08:23.307 "seek_data": false,
00:08:23.307 "copy": true,
00:08:23.307 "nvme_iov_md": false
00:08:23.307 },
00:08:23.307 "memory_domains": [
00:08:23.307 {
00:08:23.307 "dma_device_id": "system",
00:08:23.307 "dma_device_type": 1
00:08:23.307 },
00:08:23.307 {
00:08:23.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:23.307 "dma_device_type": 2
00:08:23.307 }
00:08:23.307 ],
00:08:23.307 "driver_specific": {}
00:08:23.307 }
00:08:23.307 ]
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:23.307 "name": "Existed_Raid",
00:08:23.307 "uuid": "6a689d0c-ee22-4c11-adb1-4f8f443d5921",
00:08:23.307 "strip_size_kb": 64,
00:08:23.307 "state": "configuring",
00:08:23.307 "raid_level": "concat",
00:08:23.307 "superblock": true,
00:08:23.307 "num_base_bdevs": 2,
00:08:23.307 "num_base_bdevs_discovered": 1,
00:08:23.307 "num_base_bdevs_operational": 2,
00:08:23.307 "base_bdevs_list": [
00:08:23.307 {
00:08:23.307 "name": "BaseBdev1",
00:08:23.307 "uuid": "dd3900eb-650e-448e-be20-40decb379e54",
00:08:23.307 "is_configured": true,
00:08:23.307 "data_offset": 2048,
00:08:23.307 "data_size": 63488
00:08:23.307 },
00:08:23.307 {
00:08:23.307 "name": "BaseBdev2",
00:08:23.307 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:23.307 "is_configured": false,
00:08:23.307 "data_offset": 0,
00:08:23.307 "data_size": 0
00:08:23.307 }
00:08:23.307 ]
00:08:23.307 }'
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:23.307 01:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.877 [2024-11-17 01:28:32.100488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:23.877 [2024-11-17 01:28:32.100545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.877 [2024-11-17 01:28:32.112525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:23.877 [2024-11-17 01:28:32.114342] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:23.877 [2024-11-17 01:28:32.114386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:23.877 "name": "Existed_Raid",
00:08:23.877 "uuid": "fc67139e-062b-45fb-afec-ee3f20f52f97",
00:08:23.877 "strip_size_kb": 64,
00:08:23.877 "state": "configuring",
00:08:23.877 "raid_level": "concat",
00:08:23.877 "superblock": true,
00:08:23.877 "num_base_bdevs": 2,
00:08:23.877 "num_base_bdevs_discovered": 1,
00:08:23.877 "num_base_bdevs_operational": 2,
00:08:23.877 "base_bdevs_list": [
00:08:23.877 {
00:08:23.877 "name": "BaseBdev1",
00:08:23.877 "uuid": "dd3900eb-650e-448e-be20-40decb379e54",
00:08:23.877 "is_configured": true,
00:08:23.877 "data_offset": 2048,
00:08:23.877 "data_size": 63488
00:08:23.877 },
00:08:23.877 {
00:08:23.877 "name": "BaseBdev2",
00:08:23.877 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:23.877 "is_configured": false,
00:08:23.877 "data_offset": 0,
00:08:23.877 "data_size": 0
00:08:23.877 }
00:08:23.877 ]
00:08:23.877 }'
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:23.877 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.137 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:24.137 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.137 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.396 [2024-11-17 01:28:32.597862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:24.396 [2024-11-17 01:28:32.598088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:24.396 [2024-11-17 01:28:32.598108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:24.396 [2024-11-17 01:28:32.598377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:24.396 [2024-11-17 01:28:32.598522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:24.396 [2024-11-17 01:28:32.598534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:24.396 BaseBdev2
00:08:24.396 [2024-11-17 01:28:32.598664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.396 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.396 [
00:08:24.397 {
00:08:24.397 "name": "BaseBdev2",
00:08:24.397 "aliases": [
00:08:24.397 "b30a4eb4-d3e6-49d6-9628-95ebddb432ae"
00:08:24.397 ],
00:08:24.397 "product_name": "Malloc disk",
00:08:24.397 "block_size": 512,
00:08:24.397 "num_blocks": 65536,
00:08:24.397 "uuid": "b30a4eb4-d3e6-49d6-9628-95ebddb432ae",
00:08:24.397 "assigned_rate_limits": {
00:08:24.397 "rw_ios_per_sec": 0,
00:08:24.397 "rw_mbytes_per_sec": 0,
00:08:24.397 "r_mbytes_per_sec": 0,
00:08:24.397 "w_mbytes_per_sec": 0
00:08:24.397 },
00:08:24.397 "claimed": true,
00:08:24.397 "claim_type": "exclusive_write",
00:08:24.397 "zoned": false,
00:08:24.397 "supported_io_types": {
00:08:24.397 "read": true,
00:08:24.397 "write": true,
00:08:24.397 "unmap": true,
00:08:24.397 "flush": true,
00:08:24.397 "reset": true,
00:08:24.397 "nvme_admin": false,
00:08:24.397 "nvme_io": false,
00:08:24.397 "nvme_io_md": false,
00:08:24.397 "write_zeroes": true,
00:08:24.397 "zcopy": true,
00:08:24.397 "get_zone_info": false,
00:08:24.397 "zone_management": false,
00:08:24.397 "zone_append": false,
00:08:24.397 "compare": false,
00:08:24.397 "compare_and_write": false,
00:08:24.397 "abort": true,
00:08:24.397 "seek_hole": false,
00:08:24.397 "seek_data": false,
00:08:24.397 "copy": true,
00:08:24.397 "nvme_iov_md": false
00:08:24.397 },
00:08:24.397 "memory_domains": [
00:08:24.397 {
00:08:24.397 "dma_device_id": "system",
00:08:24.397 "dma_device_type": 1
00:08:24.397 },
00:08:24.397 {
00:08:24.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:24.397 "dma_device_type": 2
00:08:24.397 }
00:08:24.397 ],
00:08:24.397 "driver_specific": {}
00:08:24.397 }
00:08:24.397 ]
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:24.397 "name": "Existed_Raid",
00:08:24.397 "uuid": "fc67139e-062b-45fb-afec-ee3f20f52f97",
00:08:24.397 "strip_size_kb": 64,
00:08:24.397 "state": "online",
00:08:24.397 "raid_level": "concat",
00:08:24.397 "superblock": true,
00:08:24.397 "num_base_bdevs": 2,
00:08:24.397 "num_base_bdevs_discovered": 2,
00:08:24.397 "num_base_bdevs_operational": 2,
00:08:24.397 "base_bdevs_list": [
00:08:24.397 {
00:08:24.397 "name": "BaseBdev1",
00:08:24.397 "uuid": "dd3900eb-650e-448e-be20-40decb379e54",
00:08:24.397 "is_configured": true,
00:08:24.397 "data_offset": 2048,
00:08:24.397 "data_size": 63488
00:08:24.397 },
00:08:24.397 {
00:08:24.397 "name": "BaseBdev2",
00:08:24.397 "uuid": "b30a4eb4-d3e6-49d6-9628-95ebddb432ae",
00:08:24.397 "is_configured": true,
00:08:24.397 "data_offset": 2048,
00:08:24.397 "data_size": 63488
00:08:24.397 }
00:08:24.397 ]
00:08:24.397 }'
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:24.397 01:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.657 [2024-11-17 01:28:33.057364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.657 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:24.657 "name": "Existed_Raid",
00:08:24.657 "aliases": [
00:08:24.657 "fc67139e-062b-45fb-afec-ee3f20f52f97"
00:08:24.657 ],
00:08:24.657 "product_name": "Raid Volume",
00:08:24.657 "block_size": 512,
00:08:24.657 "num_blocks": 126976,
00:08:24.657 "uuid": "fc67139e-062b-45fb-afec-ee3f20f52f97",
00:08:24.657 "assigned_rate_limits": {
00:08:24.657 "rw_ios_per_sec": 0,
00:08:24.657 "rw_mbytes_per_sec": 0,
00:08:24.657 "r_mbytes_per_sec": 0,
00:08:24.657 "w_mbytes_per_sec": 0
00:08:24.657 },
00:08:24.657 "claimed": false,
00:08:24.657 "zoned": false,
00:08:24.657 "supported_io_types": {
00:08:24.657 "read": true,
00:08:24.657 "write": true,
00:08:24.657 "unmap": true,
00:08:24.657 "flush": true,
00:08:24.657 "reset": true,
00:08:24.657 "nvme_admin": false,
00:08:24.657 "nvme_io": false,
00:08:24.657 "nvme_io_md": false,
00:08:24.657 "write_zeroes": true,
00:08:24.657 "zcopy": false,
00:08:24.657 "get_zone_info": false,
00:08:24.657 "zone_management": false,
00:08:24.657 "zone_append": false,
00:08:24.657 "compare": false,
00:08:24.657 "compare_and_write": false,
00:08:24.657 "abort": false,
00:08:24.657 "seek_hole": false,
00:08:24.657 "seek_data": false,
00:08:24.657 "copy": false,
00:08:24.657 "nvme_iov_md": false
00:08:24.657 },
00:08:24.657 "memory_domains": [
00:08:24.657 {
00:08:24.657 "dma_device_id": "system",
00:08:24.657 "dma_device_type": 1
00:08:24.657 },
00:08:24.658 {
00:08:24.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:24.658 "dma_device_type": 2
00:08:24.658 },
00:08:24.658 {
00:08:24.658 "dma_device_id": "system",
00:08:24.658 "dma_device_type": 1
00:08:24.658 },
00:08:24.658 {
00:08:24.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:24.658 "dma_device_type": 2
00:08:24.658 }
00:08:24.658 ],
00:08:24.658 "driver_specific": {
00:08:24.658 "raid": {
00:08:24.658 "uuid": "fc67139e-062b-45fb-afec-ee3f20f52f97",
00:08:24.658 "strip_size_kb": 64,
00:08:24.658 "state": "online",
00:08:24.658 "raid_level": "concat",
00:08:24.658 "superblock": true,
00:08:24.658 "num_base_bdevs": 2,
00:08:24.658 "num_base_bdevs_discovered": 2,
00:08:24.658 "num_base_bdevs_operational": 2,
00:08:24.658 "base_bdevs_list": [
00:08:24.658 {
00:08:24.658 "name": "BaseBdev1",
00:08:24.658 "uuid": "dd3900eb-650e-448e-be20-40decb379e54",
00:08:24.658 "is_configured": true,
00:08:24.658 "data_offset": 2048,
00:08:24.658 "data_size": 63488
00:08:24.658 },
00:08:24.658 {
00:08:24.658 "name": "BaseBdev2",
00:08:24.658 "uuid": "b30a4eb4-d3e6-49d6-9628-95ebddb432ae",
00:08:24.658 "is_configured": true,
00:08:24.658 "data_offset": 2048,
00:08:24.658 "data_size": 63488
00:08:24.658 }
00:08:24.658 ]
00:08:24.658 }
00:08:24.658 }
00:08:24.658 }'
00:08:24.658 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:24.919 BaseBdev2'
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:24.919 [2024-11-17 01:28:33.260783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:24.919 [2024-11-17 01:28:33.260814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:24.919 [2024-11-17 01:28:33.260862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:24.919 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:24.920 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.179 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.179 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:25.179 "name": "Existed_Raid",
00:08:25.179 "uuid": "fc67139e-062b-45fb-afec-ee3f20f52f97",
00:08:25.179 "strip_size_kb": 64,
00:08:25.179 "state": "offline",
00:08:25.179 "raid_level": "concat",
00:08:25.179 "superblock": true,
00:08:25.179 "num_base_bdevs": 2,
00:08:25.179 "num_base_bdevs_discovered": 1,
00:08:25.179 "num_base_bdevs_operational": 1,
00:08:25.179 "base_bdevs_list": [
00:08:25.179 {
00:08:25.179 "name": null,
00:08:25.179 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:25.179 "is_configured": false,
00:08:25.179 "data_offset": 0,
00:08:25.179 "data_size": 63488
00:08:25.179 },
00:08:25.179 {
00:08:25.179 "name": "BaseBdev2",
00:08:25.179 "uuid": "b30a4eb4-d3e6-49d6-9628-95ebddb432ae",
"is_configured": true,
00:08:25.179 "data_offset": 2048,
00:08:25.179 "data_size": 63488
00:08:25.179 }
00:08:25.179 ]
00:08:25.179 }'
00:08:25.179 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:25.179 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.438 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.438 [2024-11-17 01:28:33.832157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:25.438 [2024-11-17 01:28:33.832211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61814
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61814 ']'
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61814
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61814
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61814' 00:08:25.698 killing process with pid 61814 00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61814 00:08:25.698 [2024-11-17 01:28:33.994653] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.698 01:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61814 00:08:25.698 [2024-11-17 01:28:34.011208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.636 01:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:26.636 00:08:26.636 real 0m4.859s 00:08:26.636 user 0m7.033s 00:08:26.636 sys 0m0.781s 00:08:26.636 ************************************ 00:08:26.636 END TEST raid_state_function_test_sb 00:08:26.636 ************************************ 00:08:26.636 01:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.636 01:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.895 01:28:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:26.895 01:28:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:26.895 01:28:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.895 01:28:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.895 ************************************ 00:08:26.895 START TEST raid_superblock_test 00:08:26.895 ************************************ 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62056 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62056 00:08:26.895 
01:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62056 ']' 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.895 01:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.896 01:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.896 01:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.896 01:28:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.896 [2024-11-17 01:28:35.233945] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:26.896 [2024-11-17 01:28:35.234122] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62056 ] 00:08:27.155 [2024-11-17 01:28:35.389460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.155 [2024-11-17 01:28:35.497672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.415 [2024-11-17 01:28:35.681484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.415 [2024-11-17 01:28:35.681590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 
00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.675 malloc1 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.675 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.676 [2024-11-17 01:28:36.101742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.676 [2024-11-17 01:28:36.101815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.676 [2024-11-17 01:28:36.101839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:08:27.676 [2024-11-17 01:28:36.101855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.676 [2024-11-17 01:28:36.103869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.676 [2024-11-17 01:28:36.103904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.676 pt1 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.676 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.936 malloc2 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.936 [2024-11-17 01:28:36.158583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:27.936 [2024-11-17 01:28:36.158677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.936 [2024-11-17 01:28:36.158731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:27.936 [2024-11-17 01:28:36.158768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.936 [2024-11-17 01:28:36.160748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.936 [2024-11-17 01:28:36.160844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:27.936 pt2 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.936 [2024-11-17 01:28:36.170612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.936 [2024-11-17 01:28:36.172377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:27.936 [2024-11-17 01:28:36.172570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:08:27.936 [2024-11-17 01:28:36.172614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:27.936 [2024-11-17 01:28:36.172883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:27.936 [2024-11-17 01:28:36.173063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:27.936 [2024-11-17 01:28:36.173104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:27.936 [2024-11-17 01:28:36.173277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.936 "name": "raid_bdev1", 00:08:27.936 "uuid": "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3", 00:08:27.936 "strip_size_kb": 64, 00:08:27.936 "state": "online", 00:08:27.936 "raid_level": "concat", 00:08:27.936 "superblock": true, 00:08:27.936 "num_base_bdevs": 2, 00:08:27.936 "num_base_bdevs_discovered": 2, 00:08:27.936 "num_base_bdevs_operational": 2, 00:08:27.936 "base_bdevs_list": [ 00:08:27.936 { 00:08:27.936 "name": "pt1", 00:08:27.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.936 "is_configured": true, 00:08:27.936 "data_offset": 2048, 00:08:27.936 "data_size": 63488 00:08:27.936 }, 00:08:27.936 { 00:08:27.936 "name": "pt2", 00:08:27.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.936 "is_configured": true, 00:08:27.936 "data_offset": 2048, 00:08:27.936 "data_size": 63488 00:08:27.936 } 00:08:27.936 ] 00:08:27.936 }' 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.936 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.196 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.196 [2024-11-17 01:28:36.650052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.456 "name": "raid_bdev1", 00:08:28.456 "aliases": [ 00:08:28.456 "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3" 00:08:28.456 ], 00:08:28.456 "product_name": "Raid Volume", 00:08:28.456 "block_size": 512, 00:08:28.456 "num_blocks": 126976, 00:08:28.456 "uuid": "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3", 00:08:28.456 "assigned_rate_limits": { 00:08:28.456 "rw_ios_per_sec": 0, 00:08:28.456 "rw_mbytes_per_sec": 0, 00:08:28.456 "r_mbytes_per_sec": 0, 00:08:28.456 "w_mbytes_per_sec": 0 00:08:28.456 }, 00:08:28.456 "claimed": false, 00:08:28.456 "zoned": false, 00:08:28.456 "supported_io_types": { 00:08:28.456 "read": true, 00:08:28.456 "write": true, 00:08:28.456 "unmap": true, 00:08:28.456 "flush": true, 00:08:28.456 "reset": true, 00:08:28.456 "nvme_admin": false, 00:08:28.456 "nvme_io": false, 00:08:28.456 "nvme_io_md": false, 00:08:28.456 "write_zeroes": true, 00:08:28.456 "zcopy": false, 00:08:28.456 "get_zone_info": false, 00:08:28.456 "zone_management": false, 00:08:28.456 
"zone_append": false, 00:08:28.456 "compare": false, 00:08:28.456 "compare_and_write": false, 00:08:28.456 "abort": false, 00:08:28.456 "seek_hole": false, 00:08:28.456 "seek_data": false, 00:08:28.456 "copy": false, 00:08:28.456 "nvme_iov_md": false 00:08:28.456 }, 00:08:28.456 "memory_domains": [ 00:08:28.456 { 00:08:28.456 "dma_device_id": "system", 00:08:28.456 "dma_device_type": 1 00:08:28.456 }, 00:08:28.456 { 00:08:28.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.456 "dma_device_type": 2 00:08:28.456 }, 00:08:28.456 { 00:08:28.456 "dma_device_id": "system", 00:08:28.456 "dma_device_type": 1 00:08:28.456 }, 00:08:28.456 { 00:08:28.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.456 "dma_device_type": 2 00:08:28.456 } 00:08:28.456 ], 00:08:28.456 "driver_specific": { 00:08:28.456 "raid": { 00:08:28.456 "uuid": "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3", 00:08:28.456 "strip_size_kb": 64, 00:08:28.456 "state": "online", 00:08:28.456 "raid_level": "concat", 00:08:28.456 "superblock": true, 00:08:28.456 "num_base_bdevs": 2, 00:08:28.456 "num_base_bdevs_discovered": 2, 00:08:28.456 "num_base_bdevs_operational": 2, 00:08:28.456 "base_bdevs_list": [ 00:08:28.456 { 00:08:28.456 "name": "pt1", 00:08:28.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.456 "is_configured": true, 00:08:28.456 "data_offset": 2048, 00:08:28.456 "data_size": 63488 00:08:28.456 }, 00:08:28.456 { 00:08:28.456 "name": "pt2", 00:08:28.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.456 "is_configured": true, 00:08:28.456 "data_offset": 2048, 00:08:28.456 "data_size": 63488 00:08:28.456 } 00:08:28.456 ] 00:08:28.456 } 00:08:28.456 } 00:08:28.456 }' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:28.456 pt2' 00:08:28.456 01:28:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:28.456 [2024-11-17 01:28:36.873592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3 ']' 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.456 [2024-11-17 01:28:36.901284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.456 [2024-11-17 01:28:36.901307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.456 [2024-11-17 01:28:36.901382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.456 [2024-11-17 01:28:36.901428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.456 [2024-11-17 01:28:36.901439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:28.456 01:28:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.456 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:28.716 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.716 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:28.716 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:28.716 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.716 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:28.717 01:28:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:28.717 01:28:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.717 [2024-11-17 01:28:37.037132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:28.717 [2024-11-17 01:28:37.038934] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:28.717 [2024-11-17 01:28:37.038999] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:28.717 [2024-11-17 01:28:37.039063] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:28.717 [2024-11-17 01:28:37.039078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.717 [2024-11-17 01:28:37.039089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:28.717 request: 00:08:28.717 { 00:08:28.717 "name": "raid_bdev1", 00:08:28.717 "raid_level": "concat", 00:08:28.717 "base_bdevs": [ 00:08:28.717 "malloc1", 00:08:28.717 "malloc2" 00:08:28.717 ], 00:08:28.717 "strip_size_kb": 64, 00:08:28.717 "superblock": false, 00:08:28.717 "method": "bdev_raid_create", 00:08:28.717 "req_id": 1 00:08:28.717 } 00:08:28.717 Got JSON-RPC error response 00:08:28.717 response: 00:08:28.717 { 00:08:28.717 "code": -17, 00:08:28.717 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:28.717 } 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:28.717 01:28:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.717 [2024-11-17 01:28:37.100945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:28.717 [2024-11-17 01:28:37.101036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.717 [2024-11-17 01:28:37.101073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:28.717 [2024-11-17 01:28:37.101102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.717 [2024-11-17 01:28:37.103132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.717 [2024-11-17 01:28:37.103202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:28.717 [2024-11-17 01:28:37.103293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:28.717 [2024-11-17 01:28:37.103368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:28.717 pt1 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.717 "name": "raid_bdev1", 00:08:28.717 "uuid": "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3", 00:08:28.717 "strip_size_kb": 64, 00:08:28.717 "state": "configuring", 00:08:28.717 "raid_level": "concat", 00:08:28.717 "superblock": true, 00:08:28.717 "num_base_bdevs": 2, 00:08:28.717 
"num_base_bdevs_discovered": 1, 00:08:28.717 "num_base_bdevs_operational": 2, 00:08:28.717 "base_bdevs_list": [ 00:08:28.717 { 00:08:28.717 "name": "pt1", 00:08:28.717 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.717 "is_configured": true, 00:08:28.717 "data_offset": 2048, 00:08:28.717 "data_size": 63488 00:08:28.717 }, 00:08:28.717 { 00:08:28.717 "name": null, 00:08:28.717 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.717 "is_configured": false, 00:08:28.717 "data_offset": 2048, 00:08:28.717 "data_size": 63488 00:08:28.717 } 00:08:28.717 ] 00:08:28.717 }' 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.717 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.288 [2024-11-17 01:28:37.516290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.288 [2024-11-17 01:28:37.516369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.288 [2024-11-17 01:28:37.516390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:29.288 [2024-11-17 01:28:37.516400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.288 [2024-11-17 01:28:37.516858] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.288 [2024-11-17 01:28:37.516891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.288 [2024-11-17 01:28:37.516976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.288 [2024-11-17 01:28:37.517000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.288 [2024-11-17 01:28:37.517117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.288 [2024-11-17 01:28:37.517128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:29.288 [2024-11-17 01:28:37.517350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:29.288 [2024-11-17 01:28:37.517489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.288 [2024-11-17 01:28:37.517498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:29.288 [2024-11-17 01:28:37.517641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.288 pt2 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.288 "name": "raid_bdev1", 00:08:29.288 "uuid": "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3", 00:08:29.288 "strip_size_kb": 64, 00:08:29.288 "state": "online", 00:08:29.288 "raid_level": "concat", 00:08:29.288 "superblock": true, 00:08:29.288 "num_base_bdevs": 2, 00:08:29.288 "num_base_bdevs_discovered": 2, 00:08:29.288 "num_base_bdevs_operational": 2, 00:08:29.288 "base_bdevs_list": [ 00:08:29.288 { 00:08:29.288 "name": "pt1", 00:08:29.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.288 "is_configured": true, 00:08:29.288 "data_offset": 2048, 00:08:29.288 "data_size": 63488 00:08:29.288 }, 00:08:29.288 { 00:08:29.288 "name": "pt2", 00:08:29.288 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:29.288 "is_configured": true, 00:08:29.288 "data_offset": 2048, 00:08:29.288 "data_size": 63488 00:08:29.288 } 00:08:29.288 ] 00:08:29.288 }' 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.288 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.548 01:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.548 [2024-11-17 01:28:37.999663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.808 "name": "raid_bdev1", 00:08:29.808 "aliases": [ 00:08:29.808 "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3" 00:08:29.808 ], 00:08:29.808 "product_name": "Raid Volume", 00:08:29.808 "block_size": 512, 00:08:29.808 
"num_blocks": 126976, 00:08:29.808 "uuid": "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3", 00:08:29.808 "assigned_rate_limits": { 00:08:29.808 "rw_ios_per_sec": 0, 00:08:29.808 "rw_mbytes_per_sec": 0, 00:08:29.808 "r_mbytes_per_sec": 0, 00:08:29.808 "w_mbytes_per_sec": 0 00:08:29.808 }, 00:08:29.808 "claimed": false, 00:08:29.808 "zoned": false, 00:08:29.808 "supported_io_types": { 00:08:29.808 "read": true, 00:08:29.808 "write": true, 00:08:29.808 "unmap": true, 00:08:29.808 "flush": true, 00:08:29.808 "reset": true, 00:08:29.808 "nvme_admin": false, 00:08:29.808 "nvme_io": false, 00:08:29.808 "nvme_io_md": false, 00:08:29.808 "write_zeroes": true, 00:08:29.808 "zcopy": false, 00:08:29.808 "get_zone_info": false, 00:08:29.808 "zone_management": false, 00:08:29.808 "zone_append": false, 00:08:29.808 "compare": false, 00:08:29.808 "compare_and_write": false, 00:08:29.808 "abort": false, 00:08:29.808 "seek_hole": false, 00:08:29.808 "seek_data": false, 00:08:29.808 "copy": false, 00:08:29.808 "nvme_iov_md": false 00:08:29.808 }, 00:08:29.808 "memory_domains": [ 00:08:29.808 { 00:08:29.808 "dma_device_id": "system", 00:08:29.808 "dma_device_type": 1 00:08:29.808 }, 00:08:29.808 { 00:08:29.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.808 "dma_device_type": 2 00:08:29.808 }, 00:08:29.808 { 00:08:29.808 "dma_device_id": "system", 00:08:29.808 "dma_device_type": 1 00:08:29.808 }, 00:08:29.808 { 00:08:29.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.808 "dma_device_type": 2 00:08:29.808 } 00:08:29.808 ], 00:08:29.808 "driver_specific": { 00:08:29.808 "raid": { 00:08:29.808 "uuid": "5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3", 00:08:29.808 "strip_size_kb": 64, 00:08:29.808 "state": "online", 00:08:29.808 "raid_level": "concat", 00:08:29.808 "superblock": true, 00:08:29.808 "num_base_bdevs": 2, 00:08:29.808 "num_base_bdevs_discovered": 2, 00:08:29.808 "num_base_bdevs_operational": 2, 00:08:29.808 "base_bdevs_list": [ 00:08:29.808 { 00:08:29.808 "name": "pt1", 
00:08:29.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.808 "is_configured": true, 00:08:29.808 "data_offset": 2048, 00:08:29.808 "data_size": 63488 00:08:29.808 }, 00:08:29.808 { 00:08:29.808 "name": "pt2", 00:08:29.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.808 "is_configured": true, 00:08:29.808 "data_offset": 2048, 00:08:29.808 "data_size": 63488 00:08:29.808 } 00:08:29.808 ] 00:08:29.808 } 00:08:29.808 } 00:08:29.808 }' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:29.808 pt2' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:29.808 [2024-11-17 01:28:38.207427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3 '!=' 5f61f1e3-2a9f-4313-81b4-7e4ac566a5c3 ']' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 62056 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62056 ']' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62056 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.808 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62056 00:08:30.068 killing process with pid 62056 00:08:30.068 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.068 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.068 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62056' 00:08:30.068 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62056 00:08:30.068 [2024-11-17 01:28:38.283447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.068 [2024-11-17 01:28:38.283535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.068 [2024-11-17 01:28:38.283585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.068 [2024-11-17 01:28:38.283596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:30.068 01:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62056 00:08:30.068 [2024-11-17 01:28:38.477673] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.450 01:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:31.450 00:08:31.450 real 0m4.407s 00:08:31.450 user 0m6.206s 00:08:31.450 
sys 0m0.737s 00:08:31.450 01:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.450 ************************************ 00:08:31.450 END TEST raid_superblock_test 00:08:31.450 ************************************ 00:08:31.450 01:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.450 01:28:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:31.450 01:28:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.450 01:28:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.450 01:28:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.450 ************************************ 00:08:31.450 START TEST raid_read_error_test 00:08:31.450 ************************************ 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2awlKxngL2 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62268 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62268 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62268 ']' 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.450 01:28:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.450 [2024-11-17 01:28:39.719005] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:31.450 [2024-11-17 01:28:39.719125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62268 ] 00:08:31.450 [2024-11-17 01:28:39.886589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.710 [2024-11-17 01:28:39.994609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.970 [2024-11-17 01:28:40.190628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.970 [2024-11-17 01:28:40.190687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 BaseBdev1_malloc 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 true 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 [2024-11-17 01:28:40.602374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.230 [2024-11-17 01:28:40.602430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.230 [2024-11-17 01:28:40.602449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:32.230 [2024-11-17 01:28:40.602459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.230 [2024-11-17 01:28:40.604484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.230 [2024-11-17 01:28:40.604524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:32.230 BaseBdev1 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 BaseBdev2_malloc 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 true 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 [2024-11-17 01:28:40.668483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.230 [2024-11-17 01:28:40.668540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.230 [2024-11-17 01:28:40.668573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:32.230 [2024-11-17 01:28:40.668584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.230 [2024-11-17 01:28:40.670567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:08:32.230 [2024-11-17 01:28:40.670602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.230 BaseBdev2 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.230 [2024-11-17 01:28:40.680523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.230 [2024-11-17 01:28:40.682269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.230 [2024-11-17 01:28:40.682464] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.230 [2024-11-17 01:28:40.682479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:32.230 [2024-11-17 01:28:40.682686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:32.230 [2024-11-17 01:28:40.682865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.230 [2024-11-17 01:28:40.682878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:32.230 [2024-11-17 01:28:40.683027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.230 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.490 "name": "raid_bdev1", 00:08:32.490 "uuid": "9263ec1e-5400-4e1a-b873-200b40a7ed20", 00:08:32.490 "strip_size_kb": 64, 00:08:32.490 "state": "online", 00:08:32.490 "raid_level": "concat", 00:08:32.490 "superblock": true, 00:08:32.490 "num_base_bdevs": 2, 00:08:32.490 "num_base_bdevs_discovered": 2, 00:08:32.490 "num_base_bdevs_operational": 2, 00:08:32.490 "base_bdevs_list": [ 00:08:32.490 { 00:08:32.490 "name": "BaseBdev1", 00:08:32.490 "uuid": 
"db672245-c744-572c-862a-02b9f40f1e97", 00:08:32.490 "is_configured": true, 00:08:32.490 "data_offset": 2048, 00:08:32.490 "data_size": 63488 00:08:32.490 }, 00:08:32.490 { 00:08:32.490 "name": "BaseBdev2", 00:08:32.490 "uuid": "7d4396a1-5146-56ab-adf8-d56b7ce455a1", 00:08:32.490 "is_configured": true, 00:08:32.490 "data_offset": 2048, 00:08:32.490 "data_size": 63488 00:08:32.490 } 00:08:32.490 ] 00:08:32.490 }' 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.490 01:28:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.749 01:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:32.749 01:28:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:32.749 [2024-11-17 01:28:41.148973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.688 "name": "raid_bdev1", 00:08:33.688 "uuid": "9263ec1e-5400-4e1a-b873-200b40a7ed20", 00:08:33.688 "strip_size_kb": 64, 00:08:33.688 "state": "online", 00:08:33.688 "raid_level": "concat", 00:08:33.688 "superblock": true, 00:08:33.688 "num_base_bdevs": 2, 00:08:33.688 "num_base_bdevs_discovered": 2, 00:08:33.688 "num_base_bdevs_operational": 2, 00:08:33.688 "base_bdevs_list": [ 00:08:33.688 { 00:08:33.688 "name": "BaseBdev1", 00:08:33.688 "uuid": 
"db672245-c744-572c-862a-02b9f40f1e97", 00:08:33.688 "is_configured": true, 00:08:33.688 "data_offset": 2048, 00:08:33.688 "data_size": 63488 00:08:33.688 }, 00:08:33.688 { 00:08:33.688 "name": "BaseBdev2", 00:08:33.688 "uuid": "7d4396a1-5146-56ab-adf8-d56b7ce455a1", 00:08:33.688 "is_configured": true, 00:08:33.688 "data_offset": 2048, 00:08:33.688 "data_size": 63488 00:08:33.688 } 00:08:33.688 ] 00:08:33.688 }' 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.688 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.257 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.257 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.257 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.257 [2024-11-17 01:28:42.455833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.257 [2024-11-17 01:28:42.455869] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.257 [2024-11-17 01:28:42.458523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.257 [2024-11-17 01:28:42.458632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.257 [2024-11-17 01:28:42.458676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.257 [2024-11-17 01:28:42.458694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:34.257 { 00:08:34.257 "results": [ 00:08:34.257 { 00:08:34.257 "job": "raid_bdev1", 00:08:34.257 "core_mask": "0x1", 00:08:34.257 "workload": "randrw", 00:08:34.257 "percentage": 50, 00:08:34.257 "status": "finished", 00:08:34.257 "queue_depth": 1, 00:08:34.257 "io_size": 
131072, 00:08:34.257 "runtime": 1.307572, 00:08:34.257 "iops": 16454.16084162096, 00:08:34.257 "mibps": 2056.77010520262, 00:08:34.257 "io_failed": 1, 00:08:34.257 "io_timeout": 0, 00:08:34.257 "avg_latency_us": 84.34573462543564, 00:08:34.257 "min_latency_us": 24.593886462882097, 00:08:34.257 "max_latency_us": 1380.8349344978167 00:08:34.257 } 00:08:34.257 ], 00:08:34.257 "core_count": 1 00:08:34.258 } 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62268 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62268 ']' 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62268 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62268 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62268' 00:08:34.258 killing process with pid 62268 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62268 00:08:34.258 [2024-11-17 01:28:42.498646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.258 01:28:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62268 00:08:34.258 [2024-11-17 01:28:42.630162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.637 01:28:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2awlKxngL2 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:35.637 ************************************ 00:08:35.637 END TEST raid_read_error_test 00:08:35.637 ************************************ 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:08:35.637 00:08:35.637 real 0m4.122s 00:08:35.637 user 0m4.877s 00:08:35.637 sys 0m0.509s 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.637 01:28:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.637 01:28:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:35.637 01:28:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.637 01:28:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.637 01:28:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.637 ************************************ 00:08:35.637 START TEST raid_write_error_test 00:08:35.637 ************************************ 00:08:35.637 01:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:35.637 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:08:35.637 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.638 
01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CKSnac1Z6L 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62408 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62408 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62408 ']' 00:08:35.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.638 01:28:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.638 [2024-11-17 01:28:43.903385] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:35.638 [2024-11-17 01:28:43.903511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62408 ] 00:08:35.638 [2024-11-17 01:28:44.074976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.897 [2024-11-17 01:28:44.194454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.160 [2024-11-17 01:28:44.398148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.160 [2024-11-17 01:28:44.398212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.428 BaseBdev1_malloc 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.428 true 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.428 [2024-11-17 01:28:44.789516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.428 [2024-11-17 01:28:44.789572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.428 [2024-11-17 01:28:44.789591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.428 [2024-11-17 01:28:44.789601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.428 [2024-11-17 01:28:44.791683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.428 [2024-11-17 01:28:44.791725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.428 BaseBdev1 00:08:36.428 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.429 BaseBdev2_malloc 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.429 01:28:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.429 true 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.429 [2024-11-17 01:28:44.855755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.429 [2024-11-17 01:28:44.855813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.429 [2024-11-17 01:28:44.855829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.429 [2024-11-17 01:28:44.855839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.429 [2024-11-17 01:28:44.857860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.429 [2024-11-17 01:28:44.857894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.429 BaseBdev2 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.429 [2024-11-17 01:28:44.867802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:36.429 [2024-11-17 01:28:44.869585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.429 [2024-11-17 01:28:44.869780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:36.429 [2024-11-17 01:28:44.869796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:36.429 [2024-11-17 01:28:44.870001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:36.429 [2024-11-17 01:28:44.870167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:36.429 [2024-11-17 01:28:44.870179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:36.429 [2024-11-17 01:28:44.870316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.429 01:28:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.429 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.689 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.689 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.689 "name": "raid_bdev1", 00:08:36.689 "uuid": "a8520310-7452-422c-a0e8-1578c5c66e82", 00:08:36.689 "strip_size_kb": 64, 00:08:36.689 "state": "online", 00:08:36.689 "raid_level": "concat", 00:08:36.689 "superblock": true, 00:08:36.689 "num_base_bdevs": 2, 00:08:36.689 "num_base_bdevs_discovered": 2, 00:08:36.689 "num_base_bdevs_operational": 2, 00:08:36.689 "base_bdevs_list": [ 00:08:36.689 { 00:08:36.689 "name": "BaseBdev1", 00:08:36.689 "uuid": "53ef71ef-782c-5185-b264-82e83890f2e7", 00:08:36.689 "is_configured": true, 00:08:36.689 "data_offset": 2048, 00:08:36.689 "data_size": 63488 00:08:36.689 }, 00:08:36.689 { 00:08:36.689 "name": "BaseBdev2", 00:08:36.689 "uuid": "e2d7970b-61f3-5213-b1a3-5fc68842e288", 00:08:36.689 "is_configured": true, 00:08:36.689 "data_offset": 2048, 00:08:36.689 "data_size": 63488 00:08:36.689 } 00:08:36.689 ] 00:08:36.689 }' 00:08:36.689 01:28:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.689 01:28:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.948 01:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.948 01:28:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:36.948 [2024-11-17 01:28:45.364412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.885 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.144 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.144 "name": "raid_bdev1", 00:08:38.144 "uuid": "a8520310-7452-422c-a0e8-1578c5c66e82", 00:08:38.144 "strip_size_kb": 64, 00:08:38.144 "state": "online", 00:08:38.144 "raid_level": "concat", 00:08:38.144 "superblock": true, 00:08:38.144 "num_base_bdevs": 2, 00:08:38.145 "num_base_bdevs_discovered": 2, 00:08:38.145 "num_base_bdevs_operational": 2, 00:08:38.145 "base_bdevs_list": [ 00:08:38.145 { 00:08:38.145 "name": "BaseBdev1", 00:08:38.145 "uuid": "53ef71ef-782c-5185-b264-82e83890f2e7", 00:08:38.145 "is_configured": true, 00:08:38.145 "data_offset": 2048, 00:08:38.145 "data_size": 63488 00:08:38.145 }, 00:08:38.145 { 00:08:38.145 "name": "BaseBdev2", 00:08:38.145 "uuid": "e2d7970b-61f3-5213-b1a3-5fc68842e288", 00:08:38.145 "is_configured": true, 00:08:38.145 "data_offset": 2048, 00:08:38.145 "data_size": 63488 00:08:38.145 } 00:08:38.145 ] 00:08:38.145 }' 00:08:38.145 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.145 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 01:28:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.404 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.404 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.404 [2024-11-17 01:28:46.724032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.404 [2024-11-17 01:28:46.724129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.404 [2024-11-17 01:28:46.726665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.404 [2024-11-17 01:28:46.726746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.404 [2024-11-17 01:28:46.726804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.404 [2024-11-17 01:28:46.726857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:38.404 { 00:08:38.404 "results": [ 00:08:38.404 { 00:08:38.404 "job": "raid_bdev1", 00:08:38.404 "core_mask": "0x1", 00:08:38.404 "workload": "randrw", 00:08:38.404 "percentage": 50, 00:08:38.404 "status": "finished", 00:08:38.404 "queue_depth": 1, 00:08:38.404 "io_size": 131072, 00:08:38.404 "runtime": 1.360535, 00:08:38.404 "iops": 17068.285637635196, 00:08:38.404 "mibps": 2133.5357047043994, 00:08:38.404 "io_failed": 1, 00:08:38.404 "io_timeout": 0, 00:08:38.404 "avg_latency_us": 81.26640480460286, 00:08:38.404 "min_latency_us": 24.705676855895195, 00:08:38.404 "max_latency_us": 1395.1441048034935 00:08:38.404 } 00:08:38.404 ], 00:08:38.404 "core_count": 1 00:08:38.404 } 00:08:38.404 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.404 01:28:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62408 00:08:38.404 01:28:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62408 ']' 00:08:38.404 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62408 00:08:38.404 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:38.405 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.405 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62408 00:08:38.405 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.405 killing process with pid 62408 00:08:38.405 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.405 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62408' 00:08:38.405 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62408 00:08:38.405 [2024-11-17 01:28:46.773689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.405 01:28:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62408 00:08:38.663 [2024-11-17 01:28:46.910045] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CKSnac1Z6L 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.600 01:28:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:39.600 00:08:39.600 real 0m4.239s 00:08:39.600 user 0m5.039s 00:08:39.600 sys 0m0.529s 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.600 ************************************ 00:08:39.600 END TEST raid_write_error_test 00:08:39.600 ************************************ 00:08:39.600 01:28:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.859 01:28:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:39.859 01:28:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:39.859 01:28:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.859 01:28:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.859 01:28:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.859 ************************************ 00:08:39.859 START TEST raid_state_function_test 00:08:39.859 ************************************ 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62546 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62546' 00:08:39.859 Process raid pid: 62546 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62546 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62546 ']' 00:08:39.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.859 01:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.859 [2024-11-17 01:28:48.203645] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:39.859 [2024-11-17 01:28:48.203864] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.118 [2024-11-17 01:28:48.388307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.118 [2024-11-17 01:28:48.505768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.377 [2024-11-17 01:28:48.710299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.377 [2024-11-17 01:28:48.710345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.637 [2024-11-17 01:28:49.031120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.637 [2024-11-17 01:28:49.031171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.637 [2024-11-17 01:28:49.031182] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.637 [2024-11-17 01:28:49.031191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.637 01:28:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.637 "name": "Existed_Raid", 00:08:40.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.637 "strip_size_kb": 0, 00:08:40.637 "state": "configuring", 00:08:40.637 
"raid_level": "raid1", 00:08:40.637 "superblock": false, 00:08:40.637 "num_base_bdevs": 2, 00:08:40.637 "num_base_bdevs_discovered": 0, 00:08:40.637 "num_base_bdevs_operational": 2, 00:08:40.637 "base_bdevs_list": [ 00:08:40.637 { 00:08:40.637 "name": "BaseBdev1", 00:08:40.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.637 "is_configured": false, 00:08:40.637 "data_offset": 0, 00:08:40.637 "data_size": 0 00:08:40.637 }, 00:08:40.637 { 00:08:40.637 "name": "BaseBdev2", 00:08:40.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.637 "is_configured": false, 00:08:40.637 "data_offset": 0, 00:08:40.637 "data_size": 0 00:08:40.637 } 00:08:40.637 ] 00:08:40.637 }' 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.637 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.204 [2024-11-17 01:28:49.450356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.204 [2024-11-17 01:28:49.450447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:41.204 [2024-11-17 01:28:49.462315] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.204 [2024-11-17 01:28:49.462395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.204 [2024-11-17 01:28:49.462425] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.204 [2024-11-17 01:28:49.462450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.204 [2024-11-17 01:28:49.513889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.204 BaseBdev1 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.204 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.205 [ 00:08:41.205 { 00:08:41.205 "name": "BaseBdev1", 00:08:41.205 "aliases": [ 00:08:41.205 "731e0802-7445-48e8-8a78-8a59a591e4ff" 00:08:41.205 ], 00:08:41.205 "product_name": "Malloc disk", 00:08:41.205 "block_size": 512, 00:08:41.205 "num_blocks": 65536, 00:08:41.205 "uuid": "731e0802-7445-48e8-8a78-8a59a591e4ff", 00:08:41.205 "assigned_rate_limits": { 00:08:41.205 "rw_ios_per_sec": 0, 00:08:41.205 "rw_mbytes_per_sec": 0, 00:08:41.205 "r_mbytes_per_sec": 0, 00:08:41.205 "w_mbytes_per_sec": 0 00:08:41.205 }, 00:08:41.205 "claimed": true, 00:08:41.205 "claim_type": "exclusive_write", 00:08:41.205 "zoned": false, 00:08:41.205 "supported_io_types": { 00:08:41.205 "read": true, 00:08:41.205 "write": true, 00:08:41.205 "unmap": true, 00:08:41.205 "flush": true, 00:08:41.205 "reset": true, 00:08:41.205 "nvme_admin": false, 00:08:41.205 "nvme_io": false, 00:08:41.205 "nvme_io_md": false, 00:08:41.205 "write_zeroes": true, 00:08:41.205 "zcopy": true, 00:08:41.205 "get_zone_info": false, 00:08:41.205 "zone_management": false, 00:08:41.205 "zone_append": false, 00:08:41.205 "compare": false, 00:08:41.205 "compare_and_write": false, 00:08:41.205 "abort": true, 00:08:41.205 "seek_hole": false, 00:08:41.205 "seek_data": false, 00:08:41.205 "copy": true, 00:08:41.205 "nvme_iov_md": 
false 00:08:41.205 }, 00:08:41.205 "memory_domains": [ 00:08:41.205 { 00:08:41.205 "dma_device_id": "system", 00:08:41.205 "dma_device_type": 1 00:08:41.205 }, 00:08:41.205 { 00:08:41.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.205 "dma_device_type": 2 00:08:41.205 } 00:08:41.205 ], 00:08:41.205 "driver_specific": {} 00:08:41.205 } 00:08:41.205 ] 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.205 
01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.205 "name": "Existed_Raid", 00:08:41.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.205 "strip_size_kb": 0, 00:08:41.205 "state": "configuring", 00:08:41.205 "raid_level": "raid1", 00:08:41.205 "superblock": false, 00:08:41.205 "num_base_bdevs": 2, 00:08:41.205 "num_base_bdevs_discovered": 1, 00:08:41.205 "num_base_bdevs_operational": 2, 00:08:41.205 "base_bdevs_list": [ 00:08:41.205 { 00:08:41.205 "name": "BaseBdev1", 00:08:41.205 "uuid": "731e0802-7445-48e8-8a78-8a59a591e4ff", 00:08:41.205 "is_configured": true, 00:08:41.205 "data_offset": 0, 00:08:41.205 "data_size": 65536 00:08:41.205 }, 00:08:41.205 { 00:08:41.205 "name": "BaseBdev2", 00:08:41.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.205 "is_configured": false, 00:08:41.205 "data_offset": 0, 00:08:41.205 "data_size": 0 00:08:41.205 } 00:08:41.205 ] 00:08:41.205 }' 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.205 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 01:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.773 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.773 01:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 [2024-11-17 01:28:50.001093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.773 [2024-11-17 01:28:50.001210] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 [2024-11-17 01:28:50.013105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.773 [2024-11-17 01:28:50.014964] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.773 [2024-11-17 01:28:50.015050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.773 "name": "Existed_Raid", 00:08:41.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.773 "strip_size_kb": 0, 00:08:41.773 "state": "configuring", 00:08:41.773 "raid_level": "raid1", 00:08:41.773 "superblock": false, 00:08:41.773 "num_base_bdevs": 2, 00:08:41.773 "num_base_bdevs_discovered": 1, 00:08:41.773 "num_base_bdevs_operational": 2, 00:08:41.773 "base_bdevs_list": [ 00:08:41.773 { 00:08:41.773 "name": "BaseBdev1", 00:08:41.773 "uuid": "731e0802-7445-48e8-8a78-8a59a591e4ff", 00:08:41.773 "is_configured": true, 00:08:41.773 "data_offset": 0, 00:08:41.773 "data_size": 65536 00:08:41.773 }, 00:08:41.773 { 00:08:41.773 "name": "BaseBdev2", 00:08:41.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.773 "is_configured": false, 00:08:41.773 "data_offset": 0, 00:08:41.773 "data_size": 0 00:08:41.773 } 00:08:41.773 ] 
00:08:41.773 }' 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.773 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.032 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.032 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.032 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.292 [2024-11-17 01:28:50.524146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.292 [2024-11-17 01:28:50.524204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:42.292 [2024-11-17 01:28:50.524213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:42.292 [2024-11-17 01:28:50.524468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:42.292 [2024-11-17 01:28:50.524622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:42.292 [2024-11-17 01:28:50.524644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:42.292 [2024-11-17 01:28:50.524945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.292 BaseBdev2 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.292 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.292 [ 00:08:42.292 { 00:08:42.292 "name": "BaseBdev2", 00:08:42.292 "aliases": [ 00:08:42.292 "707bd8c7-4337-4169-b8b8-a625c17af224" 00:08:42.292 ], 00:08:42.292 "product_name": "Malloc disk", 00:08:42.292 "block_size": 512, 00:08:42.292 "num_blocks": 65536, 00:08:42.292 "uuid": "707bd8c7-4337-4169-b8b8-a625c17af224", 00:08:42.292 "assigned_rate_limits": { 00:08:42.292 "rw_ios_per_sec": 0, 00:08:42.292 "rw_mbytes_per_sec": 0, 00:08:42.292 "r_mbytes_per_sec": 0, 00:08:42.293 "w_mbytes_per_sec": 0 00:08:42.293 }, 00:08:42.293 "claimed": true, 00:08:42.293 "claim_type": "exclusive_write", 00:08:42.293 "zoned": false, 00:08:42.293 "supported_io_types": { 00:08:42.293 "read": true, 00:08:42.293 "write": true, 00:08:42.293 "unmap": true, 00:08:42.293 "flush": true, 00:08:42.293 "reset": true, 00:08:42.293 "nvme_admin": false, 00:08:42.293 "nvme_io": false, 00:08:42.293 "nvme_io_md": false, 00:08:42.293 "write_zeroes": 
true, 00:08:42.293 "zcopy": true, 00:08:42.293 "get_zone_info": false, 00:08:42.293 "zone_management": false, 00:08:42.293 "zone_append": false, 00:08:42.293 "compare": false, 00:08:42.293 "compare_and_write": false, 00:08:42.293 "abort": true, 00:08:42.293 "seek_hole": false, 00:08:42.293 "seek_data": false, 00:08:42.293 "copy": true, 00:08:42.293 "nvme_iov_md": false 00:08:42.293 }, 00:08:42.293 "memory_domains": [ 00:08:42.293 { 00:08:42.293 "dma_device_id": "system", 00:08:42.293 "dma_device_type": 1 00:08:42.293 }, 00:08:42.293 { 00:08:42.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.293 "dma_device_type": 2 00:08:42.293 } 00:08:42.293 ], 00:08:42.293 "driver_specific": {} 00:08:42.293 } 00:08:42.293 ] 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.293 01:28:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.293 "name": "Existed_Raid", 00:08:42.293 "uuid": "8caa2fa3-6646-48b8-ba35-860e2d672c9c", 00:08:42.293 "strip_size_kb": 0, 00:08:42.293 "state": "online", 00:08:42.293 "raid_level": "raid1", 00:08:42.293 "superblock": false, 00:08:42.293 "num_base_bdevs": 2, 00:08:42.293 "num_base_bdevs_discovered": 2, 00:08:42.293 "num_base_bdevs_operational": 2, 00:08:42.293 "base_bdevs_list": [ 00:08:42.293 { 00:08:42.293 "name": "BaseBdev1", 00:08:42.293 "uuid": "731e0802-7445-48e8-8a78-8a59a591e4ff", 00:08:42.293 "is_configured": true, 00:08:42.293 "data_offset": 0, 00:08:42.293 "data_size": 65536 00:08:42.293 }, 00:08:42.293 { 00:08:42.293 "name": "BaseBdev2", 00:08:42.293 "uuid": "707bd8c7-4337-4169-b8b8-a625c17af224", 00:08:42.293 "is_configured": true, 00:08:42.293 "data_offset": 0, 00:08:42.293 "data_size": 65536 00:08:42.293 } 00:08:42.293 ] 00:08:42.293 }' 00:08:42.293 01:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.293 01:28:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.862 [2024-11-17 01:28:51.027651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.862 "name": "Existed_Raid", 00:08:42.862 "aliases": [ 00:08:42.862 "8caa2fa3-6646-48b8-ba35-860e2d672c9c" 00:08:42.862 ], 00:08:42.862 "product_name": "Raid Volume", 00:08:42.862 "block_size": 512, 00:08:42.862 "num_blocks": 65536, 00:08:42.862 "uuid": "8caa2fa3-6646-48b8-ba35-860e2d672c9c", 00:08:42.862 "assigned_rate_limits": { 00:08:42.862 "rw_ios_per_sec": 0, 00:08:42.862 "rw_mbytes_per_sec": 0, 00:08:42.862 "r_mbytes_per_sec": 0, 00:08:42.862 
"w_mbytes_per_sec": 0 00:08:42.862 }, 00:08:42.862 "claimed": false, 00:08:42.862 "zoned": false, 00:08:42.862 "supported_io_types": { 00:08:42.862 "read": true, 00:08:42.862 "write": true, 00:08:42.862 "unmap": false, 00:08:42.862 "flush": false, 00:08:42.862 "reset": true, 00:08:42.862 "nvme_admin": false, 00:08:42.862 "nvme_io": false, 00:08:42.862 "nvme_io_md": false, 00:08:42.862 "write_zeroes": true, 00:08:42.862 "zcopy": false, 00:08:42.862 "get_zone_info": false, 00:08:42.862 "zone_management": false, 00:08:42.862 "zone_append": false, 00:08:42.862 "compare": false, 00:08:42.862 "compare_and_write": false, 00:08:42.862 "abort": false, 00:08:42.862 "seek_hole": false, 00:08:42.862 "seek_data": false, 00:08:42.862 "copy": false, 00:08:42.862 "nvme_iov_md": false 00:08:42.862 }, 00:08:42.862 "memory_domains": [ 00:08:42.862 { 00:08:42.862 "dma_device_id": "system", 00:08:42.862 "dma_device_type": 1 00:08:42.862 }, 00:08:42.862 { 00:08:42.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.862 "dma_device_type": 2 00:08:42.862 }, 00:08:42.862 { 00:08:42.862 "dma_device_id": "system", 00:08:42.862 "dma_device_type": 1 00:08:42.862 }, 00:08:42.862 { 00:08:42.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.862 "dma_device_type": 2 00:08:42.862 } 00:08:42.862 ], 00:08:42.862 "driver_specific": { 00:08:42.862 "raid": { 00:08:42.862 "uuid": "8caa2fa3-6646-48b8-ba35-860e2d672c9c", 00:08:42.862 "strip_size_kb": 0, 00:08:42.862 "state": "online", 00:08:42.862 "raid_level": "raid1", 00:08:42.862 "superblock": false, 00:08:42.862 "num_base_bdevs": 2, 00:08:42.862 "num_base_bdevs_discovered": 2, 00:08:42.862 "num_base_bdevs_operational": 2, 00:08:42.862 "base_bdevs_list": [ 00:08:42.862 { 00:08:42.862 "name": "BaseBdev1", 00:08:42.862 "uuid": "731e0802-7445-48e8-8a78-8a59a591e4ff", 00:08:42.862 "is_configured": true, 00:08:42.862 "data_offset": 0, 00:08:42.862 "data_size": 65536 00:08:42.862 }, 00:08:42.862 { 00:08:42.862 "name": "BaseBdev2", 00:08:42.862 "uuid": 
"707bd8c7-4337-4169-b8b8-a625c17af224", 00:08:42.862 "is_configured": true, 00:08:42.862 "data_offset": 0, 00:08:42.862 "data_size": 65536 00:08:42.862 } 00:08:42.862 ] 00:08:42.862 } 00:08:42.862 } 00:08:42.862 }' 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:42.862 BaseBdev2' 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.862 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.863 01:28:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.863 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.863 [2024-11-17 01:28:51.227053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.121 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.122 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.122 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.122 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.122 "name": "Existed_Raid", 00:08:43.122 "uuid": "8caa2fa3-6646-48b8-ba35-860e2d672c9c", 00:08:43.122 "strip_size_kb": 0, 00:08:43.122 "state": "online", 00:08:43.122 "raid_level": "raid1", 00:08:43.122 "superblock": false, 00:08:43.122 "num_base_bdevs": 2, 00:08:43.122 "num_base_bdevs_discovered": 1, 00:08:43.122 "num_base_bdevs_operational": 1, 00:08:43.122 "base_bdevs_list": [ 00:08:43.122 { 
00:08:43.122 "name": null, 00:08:43.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.122 "is_configured": false, 00:08:43.122 "data_offset": 0, 00:08:43.122 "data_size": 65536 00:08:43.122 }, 00:08:43.122 { 00:08:43.122 "name": "BaseBdev2", 00:08:43.122 "uuid": "707bd8c7-4337-4169-b8b8-a625c17af224", 00:08:43.122 "is_configured": true, 00:08:43.122 "data_offset": 0, 00:08:43.122 "data_size": 65536 00:08:43.122 } 00:08:43.122 ] 00:08:43.122 }' 00:08:43.122 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.122 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.381 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:43.381 [2024-11-17 01:28:51.756713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.381 [2024-11-17 01:28:51.756818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.641 [2024-11-17 01:28:51.847585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.641 [2024-11-17 01:28:51.847662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.641 [2024-11-17 01:28:51.847674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62546 00:08:43.641 01:28:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62546 ']' 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62546 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62546 00:08:43.641 killing process with pid 62546 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62546' 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62546 00:08:43.641 [2024-11-17 01:28:51.941203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.641 01:28:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62546 00:08:43.641 [2024-11-17 01:28:51.958011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.579 01:28:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:44.579 00:08:44.579 real 0m4.894s 00:08:44.579 user 0m7.068s 00:08:44.579 sys 0m0.799s 00:08:44.579 01:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.579 01:28:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 ************************************ 00:08:44.579 END TEST raid_state_function_test 00:08:44.579 ************************************ 00:08:44.839 01:28:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:44.839 01:28:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:44.839 01:28:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.839 01:28:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.839 ************************************ 00:08:44.839 START TEST raid_state_function_test_sb 00:08:44.839 ************************************ 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62799 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62799' 00:08:44.839 Process raid pid: 62799 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62799 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62799 ']' 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.839 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.839 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.839 [2024-11-17 01:28:53.164924] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:44.839 [2024-11-17 01:28:53.165050] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.099 [2024-11-17 01:28:53.337000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.099 [2024-11-17 01:28:53.441975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.358 [2024-11-17 01:28:53.634156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.358 [2024-11-17 01:28:53.634199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.617 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.618 [2024-11-17 01:28:53.985231] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.618 [2024-11-17 01:28:53.985279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.618 [2024-11-17 01:28:53.985289] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.618 [2024-11-17 01:28:53.985298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.618 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.618 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.618 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.618 "name": "Existed_Raid", 00:08:45.618 "uuid": "8abe5e1e-9f25-45e4-b943-ca95533d9b16", 00:08:45.618 "strip_size_kb": 0, 00:08:45.618 "state": "configuring", 00:08:45.618 "raid_level": "raid1", 00:08:45.618 "superblock": true, 00:08:45.618 "num_base_bdevs": 2, 00:08:45.618 "num_base_bdevs_discovered": 0, 00:08:45.618 "num_base_bdevs_operational": 2, 00:08:45.618 "base_bdevs_list": [ 00:08:45.618 { 00:08:45.618 "name": "BaseBdev1", 00:08:45.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.618 "is_configured": false, 00:08:45.618 "data_offset": 0, 00:08:45.618 "data_size": 0 00:08:45.618 }, 00:08:45.618 { 00:08:45.618 "name": "BaseBdev2", 00:08:45.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.618 "is_configured": false, 00:08:45.618 "data_offset": 0, 00:08:45.618 "data_size": 0 00:08:45.618 } 00:08:45.618 ] 00:08:45.618 }' 00:08:45.618 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.618 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.187 [2024-11-17 01:28:54.356523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:46.187 [2024-11-17 01:28:54.356559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.187 [2024-11-17 01:28:54.364514] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.187 [2024-11-17 01:28:54.364552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.187 [2024-11-17 01:28:54.364561] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.187 [2024-11-17 01:28:54.364573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.187 [2024-11-17 01:28:54.407793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.187 BaseBdev1 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.187 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.188 [ 00:08:46.188 { 00:08:46.188 "name": "BaseBdev1", 00:08:46.188 "aliases": [ 00:08:46.188 "1052eeba-fded-43ab-94cb-b615c317f16e" 00:08:46.188 ], 00:08:46.188 "product_name": "Malloc disk", 00:08:46.188 "block_size": 512, 00:08:46.188 "num_blocks": 65536, 00:08:46.188 "uuid": "1052eeba-fded-43ab-94cb-b615c317f16e", 00:08:46.188 "assigned_rate_limits": { 00:08:46.188 "rw_ios_per_sec": 0, 00:08:46.188 "rw_mbytes_per_sec": 0, 00:08:46.188 "r_mbytes_per_sec": 0, 00:08:46.188 "w_mbytes_per_sec": 0 00:08:46.188 }, 00:08:46.188 "claimed": true, 
00:08:46.188 "claim_type": "exclusive_write", 00:08:46.188 "zoned": false, 00:08:46.188 "supported_io_types": { 00:08:46.188 "read": true, 00:08:46.188 "write": true, 00:08:46.188 "unmap": true, 00:08:46.188 "flush": true, 00:08:46.188 "reset": true, 00:08:46.188 "nvme_admin": false, 00:08:46.188 "nvme_io": false, 00:08:46.188 "nvme_io_md": false, 00:08:46.188 "write_zeroes": true, 00:08:46.188 "zcopy": true, 00:08:46.188 "get_zone_info": false, 00:08:46.188 "zone_management": false, 00:08:46.188 "zone_append": false, 00:08:46.188 "compare": false, 00:08:46.188 "compare_and_write": false, 00:08:46.188 "abort": true, 00:08:46.188 "seek_hole": false, 00:08:46.188 "seek_data": false, 00:08:46.188 "copy": true, 00:08:46.188 "nvme_iov_md": false 00:08:46.188 }, 00:08:46.188 "memory_domains": [ 00:08:46.188 { 00:08:46.188 "dma_device_id": "system", 00:08:46.188 "dma_device_type": 1 00:08:46.188 }, 00:08:46.188 { 00:08:46.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.188 "dma_device_type": 2 00:08:46.188 } 00:08:46.188 ], 00:08:46.188 "driver_specific": {} 00:08:46.188 } 00:08:46.188 ] 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.188 "name": "Existed_Raid", 00:08:46.188 "uuid": "e249f296-4639-4484-8483-36424b0e06be", 00:08:46.188 "strip_size_kb": 0, 00:08:46.188 "state": "configuring", 00:08:46.188 "raid_level": "raid1", 00:08:46.188 "superblock": true, 00:08:46.188 "num_base_bdevs": 2, 00:08:46.188 "num_base_bdevs_discovered": 1, 00:08:46.188 "num_base_bdevs_operational": 2, 00:08:46.188 "base_bdevs_list": [ 00:08:46.188 { 00:08:46.188 "name": "BaseBdev1", 00:08:46.188 "uuid": "1052eeba-fded-43ab-94cb-b615c317f16e", 00:08:46.188 "is_configured": true, 00:08:46.188 "data_offset": 2048, 00:08:46.188 "data_size": 63488 00:08:46.188 }, 00:08:46.188 { 00:08:46.188 "name": "BaseBdev2", 00:08:46.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.188 "is_configured": false, 00:08:46.188 
"data_offset": 0, 00:08:46.188 "data_size": 0 00:08:46.188 } 00:08:46.188 ] 00:08:46.188 }' 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.188 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.448 [2024-11-17 01:28:54.863033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.448 [2024-11-17 01:28:54.863092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.448 [2024-11-17 01:28:54.875088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.448 [2024-11-17 01:28:54.877129] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.448 [2024-11-17 01:28:54.877169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.448 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.449 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.449 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.709 01:28:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.709 "name": "Existed_Raid", 00:08:46.709 "uuid": "0ba8e757-6e3e-4884-9597-491dbdef4e4a", 00:08:46.709 "strip_size_kb": 0, 00:08:46.709 "state": "configuring", 00:08:46.709 "raid_level": "raid1", 00:08:46.709 "superblock": true, 00:08:46.709 "num_base_bdevs": 2, 00:08:46.709 "num_base_bdevs_discovered": 1, 00:08:46.709 "num_base_bdevs_operational": 2, 00:08:46.709 "base_bdevs_list": [ 00:08:46.709 { 00:08:46.709 "name": "BaseBdev1", 00:08:46.709 "uuid": "1052eeba-fded-43ab-94cb-b615c317f16e", 00:08:46.709 "is_configured": true, 00:08:46.709 "data_offset": 2048, 00:08:46.709 "data_size": 63488 00:08:46.709 }, 00:08:46.709 { 00:08:46.709 "name": "BaseBdev2", 00:08:46.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.709 "is_configured": false, 00:08:46.709 "data_offset": 0, 00:08:46.709 "data_size": 0 00:08:46.709 } 00:08:46.709 ] 00:08:46.709 }' 00:08:46.709 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.709 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 [2024-11-17 01:28:55.339475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.969 [2024-11-17 01:28:55.339779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:46.969 [2024-11-17 01:28:55.339795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.969 [2024-11-17 01:28:55.340049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:46.969 
BaseBdev2 00:08:46.969 [2024-11-17 01:28:55.340206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:46.969 [2024-11-17 01:28:55.340222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:46.969 [2024-11-17 01:28:55.340375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.969 [ 00:08:46.969 { 00:08:46.969 "name": "BaseBdev2", 00:08:46.969 "aliases": [ 00:08:46.969 "44b8f95d-5e2b-4107-831d-25cdd43b763a" 00:08:46.969 ], 00:08:46.969 "product_name": "Malloc disk", 00:08:46.969 "block_size": 512, 00:08:46.969 "num_blocks": 65536, 00:08:46.969 "uuid": "44b8f95d-5e2b-4107-831d-25cdd43b763a", 00:08:46.969 "assigned_rate_limits": { 00:08:46.969 "rw_ios_per_sec": 0, 00:08:46.969 "rw_mbytes_per_sec": 0, 00:08:46.969 "r_mbytes_per_sec": 0, 00:08:46.969 "w_mbytes_per_sec": 0 00:08:46.969 }, 00:08:46.969 "claimed": true, 00:08:46.969 "claim_type": "exclusive_write", 00:08:46.969 "zoned": false, 00:08:46.969 "supported_io_types": { 00:08:46.969 "read": true, 00:08:46.969 "write": true, 00:08:46.969 "unmap": true, 00:08:46.969 "flush": true, 00:08:46.969 "reset": true, 00:08:46.969 "nvme_admin": false, 00:08:46.969 "nvme_io": false, 00:08:46.969 "nvme_io_md": false, 00:08:46.969 "write_zeroes": true, 00:08:46.969 "zcopy": true, 00:08:46.969 "get_zone_info": false, 00:08:46.969 "zone_management": false, 00:08:46.969 "zone_append": false, 00:08:46.969 "compare": false, 00:08:46.969 "compare_and_write": false, 00:08:46.969 "abort": true, 00:08:46.969 "seek_hole": false, 00:08:46.969 "seek_data": false, 00:08:46.969 "copy": true, 00:08:46.969 "nvme_iov_md": false 00:08:46.969 }, 00:08:46.969 "memory_domains": [ 00:08:46.969 { 00:08:46.969 "dma_device_id": "system", 00:08:46.969 "dma_device_type": 1 00:08:46.969 }, 00:08:46.969 { 00:08:46.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.969 "dma_device_type": 2 00:08:46.969 } 00:08:46.969 ], 00:08:46.969 "driver_specific": {} 00:08:46.969 } 00:08:46.969 ] 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:46.969 "name": "Existed_Raid", 00:08:46.969 "uuid": "0ba8e757-6e3e-4884-9597-491dbdef4e4a", 00:08:46.969 "strip_size_kb": 0, 00:08:46.969 "state": "online", 00:08:46.969 "raid_level": "raid1", 00:08:46.969 "superblock": true, 00:08:46.969 "num_base_bdevs": 2, 00:08:46.969 "num_base_bdevs_discovered": 2, 00:08:46.969 "num_base_bdevs_operational": 2, 00:08:46.969 "base_bdevs_list": [ 00:08:46.969 { 00:08:46.969 "name": "BaseBdev1", 00:08:46.969 "uuid": "1052eeba-fded-43ab-94cb-b615c317f16e", 00:08:46.969 "is_configured": true, 00:08:46.969 "data_offset": 2048, 00:08:46.969 "data_size": 63488 00:08:46.969 }, 00:08:46.969 { 00:08:46.969 "name": "BaseBdev2", 00:08:46.969 "uuid": "44b8f95d-5e2b-4107-831d-25cdd43b763a", 00:08:46.969 "is_configured": true, 00:08:46.969 "data_offset": 2048, 00:08:46.969 "data_size": 63488 00:08:46.969 } 00:08:46.969 ] 00:08:46.969 }' 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.969 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.539 01:28:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.539 [2024-11-17 01:28:55.799053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.539 "name": "Existed_Raid", 00:08:47.539 "aliases": [ 00:08:47.539 "0ba8e757-6e3e-4884-9597-491dbdef4e4a" 00:08:47.539 ], 00:08:47.539 "product_name": "Raid Volume", 00:08:47.539 "block_size": 512, 00:08:47.539 "num_blocks": 63488, 00:08:47.539 "uuid": "0ba8e757-6e3e-4884-9597-491dbdef4e4a", 00:08:47.539 "assigned_rate_limits": { 00:08:47.539 "rw_ios_per_sec": 0, 00:08:47.539 "rw_mbytes_per_sec": 0, 00:08:47.539 "r_mbytes_per_sec": 0, 00:08:47.539 "w_mbytes_per_sec": 0 00:08:47.539 }, 00:08:47.539 "claimed": false, 00:08:47.539 "zoned": false, 00:08:47.539 "supported_io_types": { 00:08:47.539 "read": true, 00:08:47.539 "write": true, 00:08:47.539 "unmap": false, 00:08:47.539 "flush": false, 00:08:47.539 "reset": true, 00:08:47.539 "nvme_admin": false, 00:08:47.539 "nvme_io": false, 00:08:47.539 "nvme_io_md": false, 00:08:47.539 "write_zeroes": true, 00:08:47.539 "zcopy": false, 00:08:47.539 "get_zone_info": false, 00:08:47.539 "zone_management": false, 00:08:47.539 "zone_append": false, 00:08:47.539 "compare": false, 00:08:47.539 "compare_and_write": false, 00:08:47.539 "abort": false, 00:08:47.539 "seek_hole": false, 00:08:47.539 "seek_data": false, 00:08:47.539 "copy": false, 00:08:47.539 "nvme_iov_md": false 00:08:47.539 }, 00:08:47.539 "memory_domains": [ 00:08:47.539 { 00:08:47.539 "dma_device_id": "system", 00:08:47.539 
"dma_device_type": 1 00:08:47.539 }, 00:08:47.539 { 00:08:47.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.539 "dma_device_type": 2 00:08:47.539 }, 00:08:47.539 { 00:08:47.539 "dma_device_id": "system", 00:08:47.539 "dma_device_type": 1 00:08:47.539 }, 00:08:47.539 { 00:08:47.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.539 "dma_device_type": 2 00:08:47.539 } 00:08:47.539 ], 00:08:47.539 "driver_specific": { 00:08:47.539 "raid": { 00:08:47.539 "uuid": "0ba8e757-6e3e-4884-9597-491dbdef4e4a", 00:08:47.539 "strip_size_kb": 0, 00:08:47.539 "state": "online", 00:08:47.539 "raid_level": "raid1", 00:08:47.539 "superblock": true, 00:08:47.539 "num_base_bdevs": 2, 00:08:47.539 "num_base_bdevs_discovered": 2, 00:08:47.539 "num_base_bdevs_operational": 2, 00:08:47.539 "base_bdevs_list": [ 00:08:47.539 { 00:08:47.539 "name": "BaseBdev1", 00:08:47.539 "uuid": "1052eeba-fded-43ab-94cb-b615c317f16e", 00:08:47.539 "is_configured": true, 00:08:47.539 "data_offset": 2048, 00:08:47.539 "data_size": 63488 00:08:47.539 }, 00:08:47.539 { 00:08:47.539 "name": "BaseBdev2", 00:08:47.539 "uuid": "44b8f95d-5e2b-4107-831d-25cdd43b763a", 00:08:47.539 "is_configured": true, 00:08:47.539 "data_offset": 2048, 00:08:47.539 "data_size": 63488 00:08:47.539 } 00:08:47.539 ] 00:08:47.539 } 00:08:47.539 } 00:08:47.539 }' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:47.539 BaseBdev2' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.539 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.799 01:28:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.799 [2024-11-17 01:28:56.006462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.799 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.799 "name": "Existed_Raid", 00:08:47.799 "uuid": "0ba8e757-6e3e-4884-9597-491dbdef4e4a", 00:08:47.800 "strip_size_kb": 0, 00:08:47.800 "state": "online", 00:08:47.800 "raid_level": "raid1", 00:08:47.800 "superblock": true, 00:08:47.800 "num_base_bdevs": 2, 00:08:47.800 "num_base_bdevs_discovered": 1, 00:08:47.800 "num_base_bdevs_operational": 1, 00:08:47.800 "base_bdevs_list": [ 00:08:47.800 { 00:08:47.800 "name": null, 00:08:47.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.800 "is_configured": false, 00:08:47.800 "data_offset": 0, 00:08:47.800 "data_size": 63488 00:08:47.800 }, 00:08:47.800 { 00:08:47.800 "name": "BaseBdev2", 00:08:47.800 "uuid": "44b8f95d-5e2b-4107-831d-25cdd43b763a", 00:08:47.800 "is_configured": true, 00:08:47.800 "data_offset": 2048, 00:08:47.800 "data_size": 63488 00:08:47.800 } 00:08:47.800 ] 00:08:47.800 }' 00:08:47.800 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.800 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.063 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:48.063 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.063 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.064 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.064 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.064 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.064 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.323 [2024-11-17 01:28:56.553648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.323 [2024-11-17 01:28:56.553771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.323 [2024-11-17 01:28:56.647865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.323 [2024-11-17 01:28:56.647929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.323 [2024-11-17 01:28:56.647941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62799 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62799 ']' 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62799 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62799 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.323 01:28:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.323 killing process with pid 62799 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62799' 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62799 00:08:48.323 [2024-11-17 01:28:56.743791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.323 01:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62799 00:08:48.323 [2024-11-17 01:28:56.761817] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.701 01:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:49.701 00:08:49.701 real 0m4.757s 00:08:49.701 user 0m6.853s 00:08:49.701 sys 0m0.760s 00:08:49.701 01:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.701 01:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.701 ************************************ 00:08:49.701 END TEST raid_state_function_test_sb 00:08:49.701 ************************************ 00:08:49.701 01:28:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:49.701 01:28:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:49.701 01:28:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.701 01:28:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.701 ************************************ 00:08:49.701 START TEST raid_superblock_test 00:08:49.701 ************************************ 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63046 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63046 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63046 ']' 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.701 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.701 [2024-11-17 01:28:57.985171] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:49.701 [2024-11-17 01:28:57.985284] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63046 ] 00:08:49.701 [2024-11-17 01:28:58.159353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.961 [2024-11-17 01:28:58.270347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.220 [2024-11-17 01:28:58.461490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.220 [2024-11-17 01:28:58.461565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.480 01:28:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 malloc1 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.480 [2024-11-17 01:28:58.847524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.480 [2024-11-17 01:28:58.847586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.480 [2024-11-17 01:28:58.847609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:50.480 [2024-11-17 01:28:58.847618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.480 
[2024-11-17 01:28:58.849707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.480 [2024-11-17 01:28:58.849739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.480 pt1 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.480 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.481 malloc2 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.481 01:28:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.481 [2024-11-17 01:28:58.903022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.481 [2024-11-17 01:28:58.903090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.481 [2024-11-17 01:28:58.903110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:50.481 [2024-11-17 01:28:58.903118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.481 [2024-11-17 01:28:58.905098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.481 [2024-11-17 01:28:58.905129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.481 pt2 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.481 [2024-11-17 01:28:58.915079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:50.481 [2024-11-17 01:28:58.916910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.481 [2024-11-17 01:28:58.917061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:50.481 [2024-11-17 01:28:58.917078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:50.481 [2024-11-17 
01:28:58.917292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:50.481 [2024-11-17 01:28:58.917434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:50.481 [2024-11-17 01:28:58.917453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:50.481 [2024-11-17 01:28:58.917577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.481 01:28:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.481 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.740 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.740 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.740 "name": "raid_bdev1", 00:08:50.740 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:50.740 "strip_size_kb": 0, 00:08:50.740 "state": "online", 00:08:50.740 "raid_level": "raid1", 00:08:50.740 "superblock": true, 00:08:50.740 "num_base_bdevs": 2, 00:08:50.740 "num_base_bdevs_discovered": 2, 00:08:50.740 "num_base_bdevs_operational": 2, 00:08:50.740 "base_bdevs_list": [ 00:08:50.740 { 00:08:50.740 "name": "pt1", 00:08:50.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.740 "is_configured": true, 00:08:50.740 "data_offset": 2048, 00:08:50.740 "data_size": 63488 00:08:50.740 }, 00:08:50.740 { 00:08:50.740 "name": "pt2", 00:08:50.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.740 "is_configured": true, 00:08:50.740 "data_offset": 2048, 00:08:50.740 "data_size": 63488 00:08:50.740 } 00:08:50.740 ] 00:08:50.740 }' 00:08:50.740 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.740 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.999 
01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.999 [2024-11-17 01:28:59.370557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.999 "name": "raid_bdev1", 00:08:50.999 "aliases": [ 00:08:50.999 "279950fb-df03-4edf-959f-0592254974e4" 00:08:50.999 ], 00:08:50.999 "product_name": "Raid Volume", 00:08:50.999 "block_size": 512, 00:08:50.999 "num_blocks": 63488, 00:08:50.999 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:50.999 "assigned_rate_limits": { 00:08:50.999 "rw_ios_per_sec": 0, 00:08:50.999 "rw_mbytes_per_sec": 0, 00:08:50.999 "r_mbytes_per_sec": 0, 00:08:50.999 "w_mbytes_per_sec": 0 00:08:50.999 }, 00:08:50.999 "claimed": false, 00:08:50.999 "zoned": false, 00:08:50.999 "supported_io_types": { 00:08:50.999 "read": true, 00:08:50.999 "write": true, 00:08:50.999 "unmap": false, 00:08:50.999 "flush": false, 00:08:50.999 "reset": true, 00:08:50.999 "nvme_admin": false, 00:08:50.999 "nvme_io": false, 00:08:50.999 "nvme_io_md": false, 00:08:50.999 "write_zeroes": true, 00:08:50.999 "zcopy": false, 00:08:50.999 "get_zone_info": false, 00:08:50.999 "zone_management": false, 00:08:50.999 "zone_append": false, 00:08:50.999 "compare": false, 00:08:50.999 "compare_and_write": false, 00:08:50.999 "abort": false, 00:08:50.999 "seek_hole": false, 
00:08:50.999 "seek_data": false, 00:08:50.999 "copy": false, 00:08:50.999 "nvme_iov_md": false 00:08:50.999 }, 00:08:50.999 "memory_domains": [ 00:08:50.999 { 00:08:50.999 "dma_device_id": "system", 00:08:50.999 "dma_device_type": 1 00:08:50.999 }, 00:08:50.999 { 00:08:50.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.999 "dma_device_type": 2 00:08:50.999 }, 00:08:50.999 { 00:08:50.999 "dma_device_id": "system", 00:08:50.999 "dma_device_type": 1 00:08:50.999 }, 00:08:50.999 { 00:08:50.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.999 "dma_device_type": 2 00:08:50.999 } 00:08:50.999 ], 00:08:50.999 "driver_specific": { 00:08:50.999 "raid": { 00:08:50.999 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:50.999 "strip_size_kb": 0, 00:08:50.999 "state": "online", 00:08:50.999 "raid_level": "raid1", 00:08:50.999 "superblock": true, 00:08:50.999 "num_base_bdevs": 2, 00:08:50.999 "num_base_bdevs_discovered": 2, 00:08:50.999 "num_base_bdevs_operational": 2, 00:08:50.999 "base_bdevs_list": [ 00:08:50.999 { 00:08:50.999 "name": "pt1", 00:08:50.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.999 "is_configured": true, 00:08:50.999 "data_offset": 2048, 00:08:50.999 "data_size": 63488 00:08:50.999 }, 00:08:50.999 { 00:08:50.999 "name": "pt2", 00:08:50.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.999 "is_configured": true, 00:08:50.999 "data_offset": 2048, 00:08:50.999 "data_size": 63488 00:08:50.999 } 00:08:50.999 ] 00:08:50.999 } 00:08:50.999 } 00:08:50.999 }' 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.999 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:50.999 pt2' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.258 01:28:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.258 [2024-11-17 01:28:59.574124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=279950fb-df03-4edf-959f-0592254974e4 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 279950fb-df03-4edf-959f-0592254974e4 ']' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.258 [2024-11-17 01:28:59.617820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.258 [2024-11-17 01:28:59.617844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.258 [2024-11-17 01:28:59.617912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.258 [2024-11-17 01:28:59.617970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.258 [2024-11-17 01:28:59.617984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:51.258 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.517 [2024-11-17 01:28:59.733640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:51.517 [2024-11-17 01:28:59.735418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:51.517 [2024-11-17 01:28:59.735482] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:08:51.517 [2024-11-17 01:28:59.735522] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:51.517 [2024-11-17 01:28:59.735536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.517 [2024-11-17 01:28:59.735545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:51.517 request: 00:08:51.517 { 00:08:51.517 "name": "raid_bdev1", 00:08:51.517 "raid_level": "raid1", 00:08:51.517 "base_bdevs": [ 00:08:51.517 "malloc1", 00:08:51.517 "malloc2" 00:08:51.517 ], 00:08:51.517 "superblock": false, 00:08:51.517 "method": "bdev_raid_create", 00:08:51.517 "req_id": 1 00:08:51.517 } 00:08:51.517 Got JSON-RPC error response 00:08:51.517 response: 00:08:51.517 { 00:08:51.517 "code": -17, 00:08:51.517 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:51.517 } 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.517 [2024-11-17 01:28:59.797515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.517 [2024-11-17 01:28:59.797560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.517 [2024-11-17 01:28:59.797574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:51.517 [2024-11-17 01:28:59.797584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.517 [2024-11-17 01:28:59.799611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.517 [2024-11-17 01:28:59.799647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.517 [2024-11-17 01:28:59.799719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:51.517 [2024-11-17 01:28:59.799789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.517 pt1 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.517 01:28:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.517 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.518 "name": "raid_bdev1", 00:08:51.518 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:51.518 "strip_size_kb": 0, 00:08:51.518 "state": "configuring", 00:08:51.518 "raid_level": "raid1", 00:08:51.518 "superblock": true, 00:08:51.518 "num_base_bdevs": 2, 00:08:51.518 "num_base_bdevs_discovered": 1, 00:08:51.518 "num_base_bdevs_operational": 2, 00:08:51.518 "base_bdevs_list": [ 00:08:51.518 { 00:08:51.518 "name": "pt1", 00:08:51.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.518 
"is_configured": true, 00:08:51.518 "data_offset": 2048, 00:08:51.518 "data_size": 63488 00:08:51.518 }, 00:08:51.518 { 00:08:51.518 "name": null, 00:08:51.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.518 "is_configured": false, 00:08:51.518 "data_offset": 2048, 00:08:51.518 "data_size": 63488 00:08:51.518 } 00:08:51.518 ] 00:08:51.518 }' 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.518 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.115 [2024-11-17 01:29:00.244777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.115 [2024-11-17 01:29:00.244837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.115 [2024-11-17 01:29:00.244858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:52.115 [2024-11-17 01:29:00.244869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.115 [2024-11-17 01:29:00.245287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.115 [2024-11-17 01:29:00.245312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.115 [2024-11-17 01:29:00.245383] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.115 [2024-11-17 01:29:00.245404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.115 [2024-11-17 01:29:00.245518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:52.115 [2024-11-17 01:29:00.245530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.115 [2024-11-17 01:29:00.245754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:52.115 [2024-11-17 01:29:00.245910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:52.115 [2024-11-17 01:29:00.245920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:52.115 [2024-11-17 01:29:00.246071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.115 pt2 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.115 
01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.115 "name": "raid_bdev1", 00:08:52.115 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:52.115 "strip_size_kb": 0, 00:08:52.115 "state": "online", 00:08:52.115 "raid_level": "raid1", 00:08:52.115 "superblock": true, 00:08:52.115 "num_base_bdevs": 2, 00:08:52.115 "num_base_bdevs_discovered": 2, 00:08:52.115 "num_base_bdevs_operational": 2, 00:08:52.115 "base_bdevs_list": [ 00:08:52.115 { 00:08:52.115 "name": "pt1", 00:08:52.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.115 "is_configured": true, 00:08:52.115 "data_offset": 2048, 00:08:52.115 "data_size": 63488 00:08:52.115 }, 00:08:52.115 { 00:08:52.115 "name": "pt2", 00:08:52.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.115 "is_configured": true, 00:08:52.115 "data_offset": 2048, 00:08:52.115 "data_size": 63488 00:08:52.115 } 00:08:52.115 ] 00:08:52.115 }' 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:52.115 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.375 [2024-11-17 01:29:00.676299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.375 "name": "raid_bdev1", 00:08:52.375 "aliases": [ 00:08:52.375 "279950fb-df03-4edf-959f-0592254974e4" 00:08:52.375 ], 00:08:52.375 "product_name": "Raid Volume", 00:08:52.375 "block_size": 512, 00:08:52.375 "num_blocks": 63488, 00:08:52.375 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:52.375 "assigned_rate_limits": { 00:08:52.375 "rw_ios_per_sec": 0, 00:08:52.375 "rw_mbytes_per_sec": 0, 00:08:52.375 "r_mbytes_per_sec": 0, 00:08:52.375 "w_mbytes_per_sec": 0 
00:08:52.375 }, 00:08:52.375 "claimed": false, 00:08:52.375 "zoned": false, 00:08:52.375 "supported_io_types": { 00:08:52.375 "read": true, 00:08:52.375 "write": true, 00:08:52.375 "unmap": false, 00:08:52.375 "flush": false, 00:08:52.375 "reset": true, 00:08:52.375 "nvme_admin": false, 00:08:52.375 "nvme_io": false, 00:08:52.375 "nvme_io_md": false, 00:08:52.375 "write_zeroes": true, 00:08:52.375 "zcopy": false, 00:08:52.375 "get_zone_info": false, 00:08:52.375 "zone_management": false, 00:08:52.375 "zone_append": false, 00:08:52.375 "compare": false, 00:08:52.375 "compare_and_write": false, 00:08:52.375 "abort": false, 00:08:52.375 "seek_hole": false, 00:08:52.375 "seek_data": false, 00:08:52.375 "copy": false, 00:08:52.375 "nvme_iov_md": false 00:08:52.375 }, 00:08:52.375 "memory_domains": [ 00:08:52.375 { 00:08:52.375 "dma_device_id": "system", 00:08:52.375 "dma_device_type": 1 00:08:52.375 }, 00:08:52.375 { 00:08:52.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.375 "dma_device_type": 2 00:08:52.375 }, 00:08:52.375 { 00:08:52.375 "dma_device_id": "system", 00:08:52.375 "dma_device_type": 1 00:08:52.375 }, 00:08:52.375 { 00:08:52.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.375 "dma_device_type": 2 00:08:52.375 } 00:08:52.375 ], 00:08:52.375 "driver_specific": { 00:08:52.375 "raid": { 00:08:52.375 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:52.375 "strip_size_kb": 0, 00:08:52.375 "state": "online", 00:08:52.375 "raid_level": "raid1", 00:08:52.375 "superblock": true, 00:08:52.375 "num_base_bdevs": 2, 00:08:52.375 "num_base_bdevs_discovered": 2, 00:08:52.375 "num_base_bdevs_operational": 2, 00:08:52.375 "base_bdevs_list": [ 00:08:52.375 { 00:08:52.375 "name": "pt1", 00:08:52.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.375 "is_configured": true, 00:08:52.375 "data_offset": 2048, 00:08:52.375 "data_size": 63488 00:08:52.375 }, 00:08:52.375 { 00:08:52.375 "name": "pt2", 00:08:52.375 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:52.375 "is_configured": true, 00:08:52.375 "data_offset": 2048, 00:08:52.375 "data_size": 63488 00:08:52.375 } 00:08:52.375 ] 00:08:52.375 } 00:08:52.375 } 00:08:52.375 }' 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:52.375 pt2' 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.375 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.376 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:52.376 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.376 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.635 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:52.636 [2024-11-17 01:29:00.915872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 279950fb-df03-4edf-959f-0592254974e4 '!=' 279950fb-df03-4edf-959f-0592254974e4 ']' 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.636 [2024-11-17 01:29:00.963577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.636 01:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.636 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:52.636 "name": "raid_bdev1", 00:08:52.636 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:52.636 "strip_size_kb": 0, 00:08:52.636 "state": "online", 00:08:52.636 "raid_level": "raid1", 00:08:52.636 "superblock": true, 00:08:52.636 "num_base_bdevs": 2, 00:08:52.636 "num_base_bdevs_discovered": 1, 00:08:52.636 "num_base_bdevs_operational": 1, 00:08:52.636 "base_bdevs_list": [ 00:08:52.636 { 00:08:52.636 "name": null, 00:08:52.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.636 "is_configured": false, 00:08:52.636 "data_offset": 0, 00:08:52.636 "data_size": 63488 00:08:52.636 }, 00:08:52.636 { 00:08:52.636 "name": "pt2", 00:08:52.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.636 "is_configured": true, 00:08:52.636 "data_offset": 2048, 00:08:52.636 "data_size": 63488 00:08:52.636 } 00:08:52.636 ] 00:08:52.636 }' 00:08:52.636 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.636 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.205 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.205 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.205 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.205 [2024-11-17 01:29:01.382913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.205 [2024-11-17 01:29:01.382946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.205 [2024-11-17 01:29:01.383040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.205 [2024-11-17 01:29:01.383085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.206 [2024-11-17 01:29:01.383096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.206 [2024-11-17 01:29:01.454745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.206 [2024-11-17 01:29:01.454813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.206 [2024-11-17 01:29:01.454834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:53.206 [2024-11-17 01:29:01.454845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.206 [2024-11-17 01:29:01.457017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.206 [2024-11-17 01:29:01.457051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.206 [2024-11-17 01:29:01.457129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.206 [2024-11-17 01:29:01.457194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.206 [2024-11-17 01:29:01.457303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:53.206 [2024-11-17 01:29:01.457318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.206 [2024-11-17 01:29:01.457546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:53.206 [2024-11-17 01:29:01.457712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:53.206 [2024-11-17 01:29:01.457725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:53.206 [2024-11-17 01:29:01.457884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.206 pt2 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:53.206 "name": "raid_bdev1", 00:08:53.206 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:53.206 "strip_size_kb": 0, 00:08:53.206 "state": "online", 00:08:53.206 "raid_level": "raid1", 00:08:53.206 "superblock": true, 00:08:53.206 "num_base_bdevs": 2, 00:08:53.206 "num_base_bdevs_discovered": 1, 00:08:53.206 "num_base_bdevs_operational": 1, 00:08:53.206 "base_bdevs_list": [ 00:08:53.206 { 00:08:53.206 "name": null, 00:08:53.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.206 "is_configured": false, 00:08:53.206 "data_offset": 2048, 00:08:53.206 "data_size": 63488 00:08:53.206 }, 00:08:53.206 { 00:08:53.206 "name": "pt2", 00:08:53.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.206 "is_configured": true, 00:08:53.206 "data_offset": 2048, 00:08:53.206 "data_size": 63488 00:08:53.206 } 00:08:53.206 ] 00:08:53.206 }' 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.206 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.466 [2024-11-17 01:29:01.810131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.466 [2024-11-17 01:29:01.810164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.466 [2024-11-17 01:29:01.810261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.466 [2024-11-17 01:29:01.810314] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.466 [2024-11-17 01:29:01.810324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.466 [2024-11-17 01:29:01.870035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:53.466 [2024-11-17 01:29:01.870109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.466 [2024-11-17 01:29:01.870129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:53.466 [2024-11-17 01:29:01.870139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.466 [2024-11-17 01:29:01.872546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.466 [2024-11-17 01:29:01.872579] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:53.466 [2024-11-17 01:29:01.872666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:53.466 [2024-11-17 01:29:01.872716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:53.466 [2024-11-17 01:29:01.872876] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:53.466 [2024-11-17 01:29:01.872891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.466 [2024-11-17 01:29:01.872906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:53.466 [2024-11-17 01:29:01.872970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.466 [2024-11-17 01:29:01.873049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:53.466 [2024-11-17 01:29:01.873057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.466 [2024-11-17 01:29:01.873329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:53.466 [2024-11-17 01:29:01.873499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:53.466 [2024-11-17 01:29:01.873518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:53.466 [2024-11-17 01:29:01.873697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.466 pt1 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.466 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.467 "name": "raid_bdev1", 00:08:53.467 "uuid": "279950fb-df03-4edf-959f-0592254974e4", 00:08:53.467 "strip_size_kb": 0, 00:08:53.467 "state": "online", 00:08:53.467 "raid_level": "raid1", 00:08:53.467 "superblock": true, 00:08:53.467 "num_base_bdevs": 2, 00:08:53.467 "num_base_bdevs_discovered": 1, 00:08:53.467 "num_base_bdevs_operational": 
1, 00:08:53.467 "base_bdevs_list": [ 00:08:53.467 { 00:08:53.467 "name": null, 00:08:53.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.467 "is_configured": false, 00:08:53.467 "data_offset": 2048, 00:08:53.467 "data_size": 63488 00:08:53.467 }, 00:08:53.467 { 00:08:53.467 "name": "pt2", 00:08:53.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.467 "is_configured": true, 00:08:53.467 "data_offset": 2048, 00:08:53.467 "data_size": 63488 00:08:53.467 } 00:08:53.467 ] 00:08:53.467 }' 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.467 01:29:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.038 [2024-11-17 01:29:02.317489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 279950fb-df03-4edf-959f-0592254974e4 '!=' 279950fb-df03-4edf-959f-0592254974e4 ']' 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63046 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63046 ']' 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63046 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63046 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.038 killing process with pid 63046 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63046' 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63046 00:08:54.038 [2024-11-17 01:29:02.386526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.038 [2024-11-17 01:29:02.386626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.038 [2024-11-17 01:29:02.386678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.038 [2024-11-17 01:29:02.386693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:54.038 01:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 
63046 00:08:54.318 [2024-11-17 01:29:02.594784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.257 01:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:55.257 00:08:55.257 real 0m5.757s 00:08:55.257 user 0m8.734s 00:08:55.257 sys 0m0.966s 00:08:55.257 01:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.257 01:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.257 ************************************ 00:08:55.257 END TEST raid_superblock_test 00:08:55.257 ************************************ 00:08:55.257 01:29:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:55.257 01:29:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.257 01:29:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.257 01:29:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.516 ************************************ 00:08:55.516 START TEST raid_read_error_test 00:08:55.516 ************************************ 00:08:55.516 01:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:55.516 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:55.516 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:55.516 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:55.516 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:55.516 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jnI05psrN5 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63370 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63370 00:08:55.517 
01:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63370 ']' 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.517 01:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.517 [2024-11-17 01:29:03.817735] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:55.517 [2024-11-17 01:29:03.817872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63370 ] 00:08:55.776 [2024-11-17 01:29:03.992941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.776 [2024-11-17 01:29:04.102041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.035 [2024-11-17 01:29:04.296922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.035 [2024-11-17 01:29:04.296968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.294 BaseBdev1_malloc 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.294 true 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.294 [2024-11-17 01:29:04.705091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:56.294 [2024-11-17 01:29:04.705157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.294 [2024-11-17 01:29:04.705174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:56.294 [2024-11-17 01:29:04.705184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.294 [2024-11-17 01:29:04.707208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.294 [2024-11-17 01:29:04.707245] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:56.294 BaseBdev1 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.294 BaseBdev2_malloc 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.294 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.554 true 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.554 [2024-11-17 01:29:04.770051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:56.554 [2024-11-17 01:29:04.770097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.554 [2024-11-17 01:29:04.770112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:56.554 [2024-11-17 01:29:04.770123] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.554 [2024-11-17 01:29:04.772090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.554 [2024-11-17 01:29:04.772125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:56.554 BaseBdev2 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.554 [2024-11-17 01:29:04.782082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.554 [2024-11-17 01:29:04.783823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.554 [2024-11-17 01:29:04.784020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:56.554 [2024-11-17 01:29:04.784035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:56.554 [2024-11-17 01:29:04.784245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:56.554 [2024-11-17 01:29:04.784429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:56.554 [2024-11-17 01:29:04.784443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:56.554 [2024-11-17 01:29:04.784584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.554 "name": "raid_bdev1", 00:08:56.554 "uuid": "2ed82900-5bda-475d-b0ec-2c058d4f8595", 00:08:56.554 "strip_size_kb": 0, 00:08:56.554 "state": "online", 00:08:56.554 "raid_level": "raid1", 00:08:56.554 "superblock": true, 00:08:56.554 "num_base_bdevs": 2, 00:08:56.554 
"num_base_bdevs_discovered": 2, 00:08:56.554 "num_base_bdevs_operational": 2, 00:08:56.554 "base_bdevs_list": [ 00:08:56.554 { 00:08:56.554 "name": "BaseBdev1", 00:08:56.554 "uuid": "2e35c935-8cd6-5a24-b06e-44fb144e22de", 00:08:56.554 "is_configured": true, 00:08:56.554 "data_offset": 2048, 00:08:56.554 "data_size": 63488 00:08:56.554 }, 00:08:56.554 { 00:08:56.554 "name": "BaseBdev2", 00:08:56.554 "uuid": "02942f88-7340-5988-9324-ea06018e79da", 00:08:56.554 "is_configured": true, 00:08:56.554 "data_offset": 2048, 00:08:56.554 "data_size": 63488 00:08:56.554 } 00:08:56.554 ] 00:08:56.554 }' 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.554 01:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.814 01:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:56.814 01:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:57.074 [2024-11-17 01:29:05.274503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:58.011 01:29:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.011 "name": "raid_bdev1", 00:08:58.011 "uuid": "2ed82900-5bda-475d-b0ec-2c058d4f8595", 00:08:58.011 "strip_size_kb": 0, 00:08:58.011 "state": "online", 
00:08:58.011 "raid_level": "raid1", 00:08:58.011 "superblock": true, 00:08:58.011 "num_base_bdevs": 2, 00:08:58.011 "num_base_bdevs_discovered": 2, 00:08:58.011 "num_base_bdevs_operational": 2, 00:08:58.011 "base_bdevs_list": [ 00:08:58.011 { 00:08:58.011 "name": "BaseBdev1", 00:08:58.011 "uuid": "2e35c935-8cd6-5a24-b06e-44fb144e22de", 00:08:58.011 "is_configured": true, 00:08:58.011 "data_offset": 2048, 00:08:58.011 "data_size": 63488 00:08:58.011 }, 00:08:58.011 { 00:08:58.011 "name": "BaseBdev2", 00:08:58.011 "uuid": "02942f88-7340-5988-9324-ea06018e79da", 00:08:58.011 "is_configured": true, 00:08:58.011 "data_offset": 2048, 00:08:58.011 "data_size": 63488 00:08:58.011 } 00:08:58.011 ] 00:08:58.011 }' 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.011 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.271 [2024-11-17 01:29:06.619815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.271 [2024-11-17 01:29:06.619854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.271 [2024-11-17 01:29:06.622363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.271 [2024-11-17 01:29:06.622408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.271 [2024-11-17 01:29:06.622490] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.271 [2024-11-17 01:29:06.622506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.271 { 00:08:58.271 "results": [ 00:08:58.271 { 00:08:58.271 "job": "raid_bdev1", 00:08:58.271 "core_mask": "0x1", 00:08:58.271 "workload": "randrw", 00:08:58.271 "percentage": 50, 00:08:58.271 "status": "finished", 00:08:58.271 "queue_depth": 1, 00:08:58.271 "io_size": 131072, 00:08:58.271 "runtime": 1.346219, 00:08:58.271 "iops": 18839.43102868107, 00:08:58.271 "mibps": 2354.9288785851336, 00:08:58.271 "io_failed": 0, 00:08:58.271 "io_timeout": 0, 00:08:58.271 "avg_latency_us": 50.61083923994533, 00:08:58.271 "min_latency_us": 21.799126637554586, 00:08:58.271 "max_latency_us": 1402.2986899563318 00:08:58.271 } 00:08:58.271 ], 00:08:58.271 "core_count": 1 00:08:58.271 } 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63370 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63370 ']' 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63370 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63370 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.271 killing process with pid 63370 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63370' 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63370 00:08:58.271 [2024-11-17 
01:29:06.664123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.271 01:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63370 00:08:58.531 [2024-11-17 01:29:06.800507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.469 01:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:59.469 01:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jnI05psrN5 00:08:59.469 01:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:59.729 01:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:59.729 01:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:59.729 01:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.729 01:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:59.729 01:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:59.729 00:08:59.729 real 0m4.217s 00:08:59.729 user 0m5.024s 00:08:59.729 sys 0m0.507s 00:08:59.729 01:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.729 01:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.729 ************************************ 00:08:59.729 END TEST raid_read_error_test 00:08:59.729 ************************************ 00:08:59.729 01:29:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:59.729 01:29:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:59.729 01:29:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.729 01:29:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.729 ************************************ 00:08:59.729 START TEST 
raid_write_error_test 00:08:59.729 ************************************ 00:08:59.729 01:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:59.729 01:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:59.729 01:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:59.729 01:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:59.729 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:59.729 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.729 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:59.729 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:59.730 01:29:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GFtGtY47vX 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63510 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63510 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63510 ']' 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.730 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.730 [2024-11-17 01:29:08.101218] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:59.730 [2024-11-17 01:29:08.101342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63510 ] 00:08:59.990 [2024-11-17 01:29:08.273958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.990 [2024-11-17 01:29:08.382360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.270 [2024-11-17 01:29:08.568422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.270 [2024-11-17 01:29:08.568467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 BaseBdev1_malloc 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 true 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.540 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.540 [2024-11-17 01:29:08.973072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:00.541 [2024-11-17 01:29:08.973133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.541 [2024-11-17 01:29:08.973151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:00.541 [2024-11-17 01:29:08.973162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.541 [2024-11-17 01:29:08.975206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.541 [2024-11-17 01:29:08.975241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:00.541 BaseBdev1 00:09:00.541 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.541 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.541 01:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:00.541 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.541 01:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 BaseBdev2_malloc 00:09:00.809 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.809 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:00.809 01:29:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.809 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 true 00:09:00.809 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.809 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:00.809 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.809 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.809 [2024-11-17 01:29:09.035350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:00.809 [2024-11-17 01:29:09.035396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.809 [2024-11-17 01:29:09.035412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:00.810 [2024-11-17 01:29:09.035421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.810 [2024-11-17 01:29:09.037386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.810 [2024-11-17 01:29:09.037420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:00.810 BaseBdev2 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.810 [2024-11-17 01:29:09.047380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:00.810 [2024-11-17 01:29:09.049100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.810 [2024-11-17 01:29:09.049292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.810 [2024-11-17 01:29:09.049314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:00.810 [2024-11-17 01:29:09.049522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:00.810 [2024-11-17 01:29:09.049689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.810 [2024-11-17 01:29:09.049705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:00.810 [2024-11-17 01:29:09.049853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.810 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.811 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.811 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.811 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.811 "name": "raid_bdev1", 00:09:00.811 "uuid": "c395e263-f2bf-4fd1-a4e1-d1a135eb1402", 00:09:00.811 "strip_size_kb": 0, 00:09:00.811 "state": "online", 00:09:00.811 "raid_level": "raid1", 00:09:00.811 "superblock": true, 00:09:00.811 "num_base_bdevs": 2, 00:09:00.811 "num_base_bdevs_discovered": 2, 00:09:00.811 "num_base_bdevs_operational": 2, 00:09:00.811 "base_bdevs_list": [ 00:09:00.811 { 00:09:00.811 "name": "BaseBdev1", 00:09:00.811 "uuid": "956d73f3-9a94-5b54-b78f-6c3288792662", 00:09:00.811 "is_configured": true, 00:09:00.811 "data_offset": 2048, 00:09:00.811 "data_size": 63488 00:09:00.811 }, 00:09:00.811 { 00:09:00.811 "name": "BaseBdev2", 00:09:00.811 "uuid": "eeda4277-8389-5d86-bc83-41b6af76e9eb", 00:09:00.811 "is_configured": true, 00:09:00.811 "data_offset": 2048, 00:09:00.811 "data_size": 63488 00:09:00.811 } 00:09:00.811 ] 00:09:00.811 }' 00:09:00.811 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.811 01:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.076 01:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:01.076 01:29:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:01.076 [2024-11-17 01:29:09.527715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.015 [2024-11-17 01:29:10.443629] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:02.015 [2024-11-17 01:29:10.443780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.015 [2024-11-17 01:29:10.444005] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.015 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.276 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.276 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.276 "name": "raid_bdev1", 00:09:02.276 "uuid": "c395e263-f2bf-4fd1-a4e1-d1a135eb1402", 00:09:02.276 "strip_size_kb": 0, 00:09:02.276 "state": "online", 00:09:02.276 "raid_level": "raid1", 00:09:02.276 "superblock": true, 00:09:02.276 "num_base_bdevs": 2, 00:09:02.276 "num_base_bdevs_discovered": 1, 00:09:02.276 "num_base_bdevs_operational": 1, 00:09:02.276 "base_bdevs_list": [ 00:09:02.276 { 00:09:02.276 "name": null, 00:09:02.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.276 "is_configured": false, 00:09:02.276 "data_offset": 0, 00:09:02.276 "data_size": 63488 00:09:02.276 }, 00:09:02.276 { 00:09:02.276 "name": 
"BaseBdev2", 00:09:02.276 "uuid": "eeda4277-8389-5d86-bc83-41b6af76e9eb", 00:09:02.276 "is_configured": true, 00:09:02.276 "data_offset": 2048, 00:09:02.276 "data_size": 63488 00:09:02.276 } 00:09:02.276 ] 00:09:02.276 }' 00:09:02.276 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.276 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.536 [2024-11-17 01:29:10.896532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.536 [2024-11-17 01:29:10.896565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.536 [2024-11-17 01:29:10.899110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.536 [2024-11-17 01:29:10.899202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.536 [2024-11-17 01:29:10.899269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.536 [2024-11-17 01:29:10.899279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:02.536 { 00:09:02.536 "results": [ 00:09:02.536 { 00:09:02.536 "job": "raid_bdev1", 00:09:02.536 "core_mask": "0x1", 00:09:02.536 "workload": "randrw", 00:09:02.536 "percentage": 50, 00:09:02.536 "status": "finished", 00:09:02.536 "queue_depth": 1, 00:09:02.536 "io_size": 131072, 00:09:02.536 "runtime": 1.369654, 00:09:02.536 "iops": 22352.360523168623, 00:09:02.536 "mibps": 2794.045065396078, 00:09:02.536 "io_failed": 0, 00:09:02.536 "io_timeout": 0, 
00:09:02.536 "avg_latency_us": 42.25308055317234, 00:09:02.536 "min_latency_us": 21.016593886462882, 00:09:02.536 "max_latency_us": 1359.3711790393013 00:09:02.536 } 00:09:02.536 ], 00:09:02.536 "core_count": 1 00:09:02.536 } 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63510 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63510 ']' 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63510 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63510 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.536 killing process with pid 63510 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63510' 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63510 00:09:02.536 [2024-11-17 01:29:10.943705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.536 01:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63510 00:09:02.796 [2024-11-17 01:29:11.077266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GFtGtY47vX 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:03.735 00:09:03.735 real 0m4.194s 00:09:03.735 user 0m4.989s 00:09:03.735 sys 0m0.545s 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.735 ************************************ 00:09:03.735 END TEST raid_write_error_test 00:09:03.735 ************************************ 00:09:03.735 01:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.995 01:29:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:03.995 01:29:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:03.995 01:29:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:03.995 01:29:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:03.995 01:29:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.995 01:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.995 ************************************ 00:09:03.995 START TEST raid_state_function_test 00:09:03.995 ************************************ 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:03.995 
01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63654 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63654' 00:09:03.995 Process raid pid: 63654 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63654 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63654 ']' 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.995 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.995 [2024-11-17 01:29:12.357716] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:03.995 [2024-11-17 01:29:12.357936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.255 [2024-11-17 01:29:12.530858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.255 [2024-11-17 01:29:12.639839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.515 [2024-11-17 01:29:12.830981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.515 [2024-11-17 01:29:12.831105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.774 [2024-11-17 01:29:13.186846] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.774 [2024-11-17 01:29:13.186892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.774 [2024-11-17 01:29:13.186902] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.774 [2024-11-17 01:29:13.186911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.774 [2024-11-17 01:29:13.186918] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.774 [2024-11-17 01:29:13.186926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.774 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:04.775 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.775 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.775 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.034 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.034 "name": "Existed_Raid", 00:09:05.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.034 "strip_size_kb": 64, 00:09:05.034 "state": "configuring", 00:09:05.034 "raid_level": "raid0", 00:09:05.034 "superblock": false, 00:09:05.034 "num_base_bdevs": 3, 00:09:05.034 "num_base_bdevs_discovered": 0, 00:09:05.034 "num_base_bdevs_operational": 3, 00:09:05.034 "base_bdevs_list": [ 00:09:05.034 { 00:09:05.034 "name": "BaseBdev1", 00:09:05.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.034 "is_configured": false, 00:09:05.034 "data_offset": 0, 00:09:05.034 "data_size": 0 00:09:05.034 }, 00:09:05.034 { 00:09:05.034 "name": "BaseBdev2", 00:09:05.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.034 "is_configured": false, 00:09:05.034 "data_offset": 0, 00:09:05.034 "data_size": 0 00:09:05.034 }, 00:09:05.034 { 00:09:05.034 "name": "BaseBdev3", 00:09:05.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.034 "is_configured": false, 00:09:05.034 "data_offset": 0, 00:09:05.034 "data_size": 0 00:09:05.034 } 00:09:05.034 ] 00:09:05.034 }' 00:09:05.034 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.034 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.294 01:29:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.294 [2024-11-17 01:29:13.582118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.294 [2024-11-17 01:29:13.582157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.294 [2024-11-17 01:29:13.594072] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.294 [2024-11-17 01:29:13.594560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.294 [2024-11-17 01:29:13.594586] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.294 [2024-11-17 01:29:13.594602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.294 [2024-11-17 01:29:13.594609] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.294 [2024-11-17 01:29:13.594619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.294 [2024-11-17 01:29:13.641452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.294 BaseBdev1 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.294 [ 00:09:05.294 { 00:09:05.294 "name": "BaseBdev1", 00:09:05.294 "aliases": [ 00:09:05.294 "173ccc35-3db7-42aa-998a-d90d9973d6d0" 00:09:05.294 ], 00:09:05.294 
"product_name": "Malloc disk", 00:09:05.294 "block_size": 512, 00:09:05.294 "num_blocks": 65536, 00:09:05.294 "uuid": "173ccc35-3db7-42aa-998a-d90d9973d6d0", 00:09:05.294 "assigned_rate_limits": { 00:09:05.294 "rw_ios_per_sec": 0, 00:09:05.294 "rw_mbytes_per_sec": 0, 00:09:05.294 "r_mbytes_per_sec": 0, 00:09:05.294 "w_mbytes_per_sec": 0 00:09:05.294 }, 00:09:05.294 "claimed": true, 00:09:05.294 "claim_type": "exclusive_write", 00:09:05.294 "zoned": false, 00:09:05.294 "supported_io_types": { 00:09:05.294 "read": true, 00:09:05.294 "write": true, 00:09:05.294 "unmap": true, 00:09:05.294 "flush": true, 00:09:05.294 "reset": true, 00:09:05.294 "nvme_admin": false, 00:09:05.294 "nvme_io": false, 00:09:05.294 "nvme_io_md": false, 00:09:05.294 "write_zeroes": true, 00:09:05.294 "zcopy": true, 00:09:05.294 "get_zone_info": false, 00:09:05.294 "zone_management": false, 00:09:05.294 "zone_append": false, 00:09:05.294 "compare": false, 00:09:05.294 "compare_and_write": false, 00:09:05.294 "abort": true, 00:09:05.294 "seek_hole": false, 00:09:05.294 "seek_data": false, 00:09:05.294 "copy": true, 00:09:05.294 "nvme_iov_md": false 00:09:05.294 }, 00:09:05.294 "memory_domains": [ 00:09:05.294 { 00:09:05.294 "dma_device_id": "system", 00:09:05.294 "dma_device_type": 1 00:09:05.294 }, 00:09:05.294 { 00:09:05.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.294 "dma_device_type": 2 00:09:05.294 } 00:09:05.294 ], 00:09:05.294 "driver_specific": {} 00:09:05.294 } 00:09:05.294 ] 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.294 01:29:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.294 "name": "Existed_Raid", 00:09:05.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.294 "strip_size_kb": 64, 00:09:05.294 "state": "configuring", 00:09:05.294 "raid_level": "raid0", 00:09:05.294 "superblock": false, 00:09:05.294 "num_base_bdevs": 3, 00:09:05.294 "num_base_bdevs_discovered": 1, 00:09:05.294 "num_base_bdevs_operational": 3, 00:09:05.294 "base_bdevs_list": [ 00:09:05.294 { 00:09:05.294 "name": "BaseBdev1", 
00:09:05.294 "uuid": "173ccc35-3db7-42aa-998a-d90d9973d6d0", 00:09:05.294 "is_configured": true, 00:09:05.294 "data_offset": 0, 00:09:05.294 "data_size": 65536 00:09:05.294 }, 00:09:05.294 { 00:09:05.294 "name": "BaseBdev2", 00:09:05.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.294 "is_configured": false, 00:09:05.294 "data_offset": 0, 00:09:05.294 "data_size": 0 00:09:05.294 }, 00:09:05.294 { 00:09:05.294 "name": "BaseBdev3", 00:09:05.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.294 "is_configured": false, 00:09:05.294 "data_offset": 0, 00:09:05.294 "data_size": 0 00:09:05.294 } 00:09:05.294 ] 00:09:05.294 }' 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.294 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.863 [2024-11-17 01:29:14.092714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.863 [2024-11-17 01:29:14.092791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.863 [2024-11-17 
01:29:14.100747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.863 [2024-11-17 01:29:14.102493] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.863 [2024-11-17 01:29:14.102535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.863 [2024-11-17 01:29:14.102545] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.863 [2024-11-17 01:29:14.102553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.863 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.863 "name": "Existed_Raid", 00:09:05.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.863 "strip_size_kb": 64, 00:09:05.863 "state": "configuring", 00:09:05.863 "raid_level": "raid0", 00:09:05.863 "superblock": false, 00:09:05.863 "num_base_bdevs": 3, 00:09:05.863 "num_base_bdevs_discovered": 1, 00:09:05.863 "num_base_bdevs_operational": 3, 00:09:05.863 "base_bdevs_list": [ 00:09:05.863 { 00:09:05.864 "name": "BaseBdev1", 00:09:05.864 "uuid": "173ccc35-3db7-42aa-998a-d90d9973d6d0", 00:09:05.864 "is_configured": true, 00:09:05.864 "data_offset": 0, 00:09:05.864 "data_size": 65536 00:09:05.864 }, 00:09:05.864 { 00:09:05.864 "name": "BaseBdev2", 00:09:05.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.864 "is_configured": false, 00:09:05.864 "data_offset": 0, 00:09:05.864 "data_size": 0 00:09:05.864 }, 00:09:05.864 { 00:09:05.864 "name": "BaseBdev3", 00:09:05.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.864 "is_configured": false, 00:09:05.864 "data_offset": 0, 00:09:05.864 "data_size": 0 00:09:05.864 } 00:09:05.864 ] 00:09:05.864 }' 00:09:05.864 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:05.864 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.442 [2024-11-17 01:29:14.618104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.442 BaseBdev2 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.442 01:29:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.442 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.442 [ 00:09:06.442 { 00:09:06.442 "name": "BaseBdev2", 00:09:06.442 "aliases": [ 00:09:06.442 "4a468ee5-67d7-476b-82e2-4a2849af3e89" 00:09:06.442 ], 00:09:06.442 "product_name": "Malloc disk", 00:09:06.442 "block_size": 512, 00:09:06.442 "num_blocks": 65536, 00:09:06.442 "uuid": "4a468ee5-67d7-476b-82e2-4a2849af3e89", 00:09:06.442 "assigned_rate_limits": { 00:09:06.442 "rw_ios_per_sec": 0, 00:09:06.442 "rw_mbytes_per_sec": 0, 00:09:06.442 "r_mbytes_per_sec": 0, 00:09:06.442 "w_mbytes_per_sec": 0 00:09:06.442 }, 00:09:06.442 "claimed": true, 00:09:06.442 "claim_type": "exclusive_write", 00:09:06.442 "zoned": false, 00:09:06.442 "supported_io_types": { 00:09:06.442 "read": true, 00:09:06.442 "write": true, 00:09:06.442 "unmap": true, 00:09:06.442 "flush": true, 00:09:06.443 "reset": true, 00:09:06.443 "nvme_admin": false, 00:09:06.443 "nvme_io": false, 00:09:06.443 "nvme_io_md": false, 00:09:06.443 "write_zeroes": true, 00:09:06.443 "zcopy": true, 00:09:06.443 "get_zone_info": false, 00:09:06.443 "zone_management": false, 00:09:06.443 "zone_append": false, 00:09:06.443 "compare": false, 00:09:06.443 "compare_and_write": false, 00:09:06.443 "abort": true, 00:09:06.443 "seek_hole": false, 00:09:06.443 "seek_data": false, 00:09:06.443 "copy": true, 00:09:06.443 "nvme_iov_md": false 00:09:06.443 }, 00:09:06.443 "memory_domains": [ 00:09:06.443 { 00:09:06.443 "dma_device_id": "system", 00:09:06.443 "dma_device_type": 1 00:09:06.443 }, 00:09:06.443 { 00:09:06.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.443 "dma_device_type": 2 00:09:06.443 } 00:09:06.443 ], 00:09:06.443 "driver_specific": {} 00:09:06.443 } 00:09:06.443 ] 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.443 01:29:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.443 "name": "Existed_Raid", 00:09:06.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.443 "strip_size_kb": 64, 00:09:06.443 "state": "configuring", 00:09:06.443 "raid_level": "raid0", 00:09:06.443 "superblock": false, 00:09:06.443 "num_base_bdevs": 3, 00:09:06.443 "num_base_bdevs_discovered": 2, 00:09:06.443 "num_base_bdevs_operational": 3, 00:09:06.443 "base_bdevs_list": [ 00:09:06.443 { 00:09:06.443 "name": "BaseBdev1", 00:09:06.443 "uuid": "173ccc35-3db7-42aa-998a-d90d9973d6d0", 00:09:06.443 "is_configured": true, 00:09:06.443 "data_offset": 0, 00:09:06.443 "data_size": 65536 00:09:06.443 }, 00:09:06.443 { 00:09:06.443 "name": "BaseBdev2", 00:09:06.443 "uuid": "4a468ee5-67d7-476b-82e2-4a2849af3e89", 00:09:06.443 "is_configured": true, 00:09:06.443 "data_offset": 0, 00:09:06.443 "data_size": 65536 00:09:06.443 }, 00:09:06.443 { 00:09:06.443 "name": "BaseBdev3", 00:09:06.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.443 "is_configured": false, 00:09:06.443 "data_offset": 0, 00:09:06.443 "data_size": 0 00:09:06.443 } 00:09:06.443 ] 00:09:06.443 }' 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.443 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.721 [2024-11-17 01:29:15.088806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.721 [2024-11-17 01:29:15.088862] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:06.721 [2024-11-17 01:29:15.088874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:06.721 [2024-11-17 01:29:15.089267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:06.721 [2024-11-17 01:29:15.089427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:06.721 [2024-11-17 01:29:15.089437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:06.721 [2024-11-17 01:29:15.089678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.721 BaseBdev3 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.721 
01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.721 [ 00:09:06.721 { 00:09:06.721 "name": "BaseBdev3", 00:09:06.721 "aliases": [ 00:09:06.721 "e9f3882c-7389-40dc-bee7-ecc651b77e8b" 00:09:06.721 ], 00:09:06.721 "product_name": "Malloc disk", 00:09:06.721 "block_size": 512, 00:09:06.721 "num_blocks": 65536, 00:09:06.721 "uuid": "e9f3882c-7389-40dc-bee7-ecc651b77e8b", 00:09:06.721 "assigned_rate_limits": { 00:09:06.721 "rw_ios_per_sec": 0, 00:09:06.721 "rw_mbytes_per_sec": 0, 00:09:06.721 "r_mbytes_per_sec": 0, 00:09:06.721 "w_mbytes_per_sec": 0 00:09:06.721 }, 00:09:06.721 "claimed": true, 00:09:06.721 "claim_type": "exclusive_write", 00:09:06.721 "zoned": false, 00:09:06.721 "supported_io_types": { 00:09:06.721 "read": true, 00:09:06.721 "write": true, 00:09:06.721 "unmap": true, 00:09:06.721 "flush": true, 00:09:06.721 "reset": true, 00:09:06.721 "nvme_admin": false, 00:09:06.721 "nvme_io": false, 00:09:06.721 "nvme_io_md": false, 00:09:06.721 "write_zeroes": true, 00:09:06.721 "zcopy": true, 00:09:06.721 "get_zone_info": false, 00:09:06.721 "zone_management": false, 00:09:06.721 "zone_append": false, 00:09:06.721 "compare": false, 00:09:06.721 "compare_and_write": false, 00:09:06.721 "abort": true, 00:09:06.721 "seek_hole": false, 00:09:06.721 "seek_data": false, 00:09:06.721 "copy": true, 00:09:06.721 "nvme_iov_md": false 00:09:06.721 }, 00:09:06.721 "memory_domains": [ 00:09:06.721 { 00:09:06.721 "dma_device_id": "system", 00:09:06.721 "dma_device_type": 1 00:09:06.721 }, 00:09:06.721 { 00:09:06.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.721 "dma_device_type": 2 00:09:06.721 } 00:09:06.721 ], 00:09:06.721 "driver_specific": {} 00:09:06.721 } 00:09:06.721 ] 
00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.721 "name": "Existed_Raid", 00:09:06.721 "uuid": "cad36ae1-4ec9-47ef-bd13-9298b4fa662d", 00:09:06.721 "strip_size_kb": 64, 00:09:06.721 "state": "online", 00:09:06.721 "raid_level": "raid0", 00:09:06.721 "superblock": false, 00:09:06.721 "num_base_bdevs": 3, 00:09:06.721 "num_base_bdevs_discovered": 3, 00:09:06.721 "num_base_bdevs_operational": 3, 00:09:06.721 "base_bdevs_list": [ 00:09:06.721 { 00:09:06.721 "name": "BaseBdev1", 00:09:06.721 "uuid": "173ccc35-3db7-42aa-998a-d90d9973d6d0", 00:09:06.721 "is_configured": true, 00:09:06.721 "data_offset": 0, 00:09:06.721 "data_size": 65536 00:09:06.721 }, 00:09:06.721 { 00:09:06.721 "name": "BaseBdev2", 00:09:06.721 "uuid": "4a468ee5-67d7-476b-82e2-4a2849af3e89", 00:09:06.721 "is_configured": true, 00:09:06.721 "data_offset": 0, 00:09:06.721 "data_size": 65536 00:09:06.721 }, 00:09:06.721 { 00:09:06.721 "name": "BaseBdev3", 00:09:06.721 "uuid": "e9f3882c-7389-40dc-bee7-ecc651b77e8b", 00:09:06.721 "is_configured": true, 00:09:06.721 "data_offset": 0, 00:09:06.721 "data_size": 65536 00:09:06.721 } 00:09:06.721 ] 00:09:06.721 }' 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.721 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.291 [2024-11-17 01:29:15.596274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.291 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.291 "name": "Existed_Raid", 00:09:07.291 "aliases": [ 00:09:07.291 "cad36ae1-4ec9-47ef-bd13-9298b4fa662d" 00:09:07.291 ], 00:09:07.291 "product_name": "Raid Volume", 00:09:07.291 "block_size": 512, 00:09:07.291 "num_blocks": 196608, 00:09:07.292 "uuid": "cad36ae1-4ec9-47ef-bd13-9298b4fa662d", 00:09:07.292 "assigned_rate_limits": { 00:09:07.292 "rw_ios_per_sec": 0, 00:09:07.292 "rw_mbytes_per_sec": 0, 00:09:07.292 "r_mbytes_per_sec": 0, 00:09:07.292 "w_mbytes_per_sec": 0 00:09:07.292 }, 00:09:07.292 "claimed": false, 00:09:07.292 "zoned": false, 00:09:07.292 "supported_io_types": { 00:09:07.292 "read": true, 00:09:07.292 "write": true, 00:09:07.292 "unmap": true, 00:09:07.292 "flush": true, 00:09:07.292 "reset": true, 00:09:07.292 "nvme_admin": false, 00:09:07.292 "nvme_io": false, 00:09:07.292 "nvme_io_md": false, 00:09:07.292 "write_zeroes": true, 00:09:07.292 "zcopy": false, 00:09:07.292 "get_zone_info": false, 00:09:07.292 "zone_management": false, 00:09:07.292 
"zone_append": false, 00:09:07.292 "compare": false, 00:09:07.292 "compare_and_write": false, 00:09:07.292 "abort": false, 00:09:07.292 "seek_hole": false, 00:09:07.292 "seek_data": false, 00:09:07.292 "copy": false, 00:09:07.292 "nvme_iov_md": false 00:09:07.292 }, 00:09:07.292 "memory_domains": [ 00:09:07.292 { 00:09:07.292 "dma_device_id": "system", 00:09:07.292 "dma_device_type": 1 00:09:07.292 }, 00:09:07.292 { 00:09:07.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.292 "dma_device_type": 2 00:09:07.292 }, 00:09:07.292 { 00:09:07.292 "dma_device_id": "system", 00:09:07.292 "dma_device_type": 1 00:09:07.292 }, 00:09:07.292 { 00:09:07.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.292 "dma_device_type": 2 00:09:07.292 }, 00:09:07.292 { 00:09:07.292 "dma_device_id": "system", 00:09:07.292 "dma_device_type": 1 00:09:07.292 }, 00:09:07.292 { 00:09:07.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.292 "dma_device_type": 2 00:09:07.292 } 00:09:07.292 ], 00:09:07.292 "driver_specific": { 00:09:07.292 "raid": { 00:09:07.292 "uuid": "cad36ae1-4ec9-47ef-bd13-9298b4fa662d", 00:09:07.292 "strip_size_kb": 64, 00:09:07.292 "state": "online", 00:09:07.292 "raid_level": "raid0", 00:09:07.292 "superblock": false, 00:09:07.292 "num_base_bdevs": 3, 00:09:07.292 "num_base_bdevs_discovered": 3, 00:09:07.292 "num_base_bdevs_operational": 3, 00:09:07.292 "base_bdevs_list": [ 00:09:07.292 { 00:09:07.292 "name": "BaseBdev1", 00:09:07.292 "uuid": "173ccc35-3db7-42aa-998a-d90d9973d6d0", 00:09:07.292 "is_configured": true, 00:09:07.292 "data_offset": 0, 00:09:07.292 "data_size": 65536 00:09:07.292 }, 00:09:07.292 { 00:09:07.292 "name": "BaseBdev2", 00:09:07.292 "uuid": "4a468ee5-67d7-476b-82e2-4a2849af3e89", 00:09:07.292 "is_configured": true, 00:09:07.292 "data_offset": 0, 00:09:07.292 "data_size": 65536 00:09:07.292 }, 00:09:07.292 { 00:09:07.292 "name": "BaseBdev3", 00:09:07.292 "uuid": "e9f3882c-7389-40dc-bee7-ecc651b77e8b", 00:09:07.292 "is_configured": true, 
00:09:07.292 "data_offset": 0, 00:09:07.292 "data_size": 65536 00:09:07.292 } 00:09:07.292 ] 00:09:07.292 } 00:09:07.292 } 00:09:07.292 }' 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:07.292 BaseBdev2 00:09:07.292 BaseBdev3' 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.292 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.552 01:29:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.552 [2024-11-17 01:29:15.847621] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.552 [2024-11-17 01:29:15.847653] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.552 [2024-11-17 01:29:15.847705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:07.552 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.553 "name": "Existed_Raid", 00:09:07.553 "uuid": "cad36ae1-4ec9-47ef-bd13-9298b4fa662d", 00:09:07.553 "strip_size_kb": 64, 00:09:07.553 "state": "offline", 00:09:07.553 "raid_level": "raid0", 00:09:07.553 "superblock": false, 00:09:07.553 "num_base_bdevs": 3, 00:09:07.553 "num_base_bdevs_discovered": 2, 00:09:07.553 "num_base_bdevs_operational": 2, 00:09:07.553 "base_bdevs_list": [ 00:09:07.553 { 00:09:07.553 "name": null, 00:09:07.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.553 "is_configured": false, 00:09:07.553 "data_offset": 0, 00:09:07.553 "data_size": 65536 00:09:07.553 }, 00:09:07.553 { 00:09:07.553 "name": "BaseBdev2", 00:09:07.553 "uuid": "4a468ee5-67d7-476b-82e2-4a2849af3e89", 00:09:07.553 "is_configured": true, 00:09:07.553 "data_offset": 0, 00:09:07.553 "data_size": 65536 00:09:07.553 }, 00:09:07.553 { 00:09:07.553 "name": "BaseBdev3", 00:09:07.553 "uuid": "e9f3882c-7389-40dc-bee7-ecc651b77e8b", 00:09:07.553 "is_configured": true, 00:09:07.553 "data_offset": 0, 00:09:07.553 "data_size": 65536 00:09:07.553 } 00:09:07.553 ] 00:09:07.553 }' 00:09:07.553 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.553 01:29:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.123 [2024-11-17 01:29:16.453867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.123 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.383 [2024-11-17 01:29:16.587987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.383 [2024-11-17 01:29:16.588041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.383 01:29:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.383 BaseBdev2 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.383 01:29:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.383 [ 00:09:08.383 { 00:09:08.383 "name": "BaseBdev2", 00:09:08.383 "aliases": [ 00:09:08.383 "d042f793-dff2-4b58-a395-28ef4ea2dcc9" 00:09:08.383 ], 00:09:08.383 "product_name": "Malloc disk", 00:09:08.383 "block_size": 512, 00:09:08.383 "num_blocks": 65536, 00:09:08.383 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:08.383 "assigned_rate_limits": { 00:09:08.383 "rw_ios_per_sec": 0, 00:09:08.383 "rw_mbytes_per_sec": 0, 00:09:08.383 "r_mbytes_per_sec": 0, 00:09:08.383 "w_mbytes_per_sec": 0 00:09:08.383 }, 00:09:08.383 "claimed": false, 00:09:08.383 "zoned": false, 00:09:08.383 "supported_io_types": { 00:09:08.383 "read": true, 00:09:08.383 "write": true, 00:09:08.383 "unmap": true, 00:09:08.383 "flush": true, 00:09:08.383 "reset": true, 00:09:08.383 "nvme_admin": false, 00:09:08.383 "nvme_io": false, 00:09:08.383 "nvme_io_md": false, 00:09:08.383 "write_zeroes": true, 00:09:08.383 "zcopy": true, 00:09:08.383 "get_zone_info": false, 00:09:08.383 "zone_management": false, 00:09:08.383 "zone_append": false, 00:09:08.383 "compare": false, 00:09:08.383 "compare_and_write": false, 00:09:08.383 "abort": true, 00:09:08.383 "seek_hole": false, 00:09:08.383 "seek_data": false, 00:09:08.383 "copy": true, 00:09:08.383 "nvme_iov_md": false 00:09:08.383 }, 00:09:08.383 "memory_domains": [ 00:09:08.383 { 00:09:08.383 "dma_device_id": "system", 00:09:08.383 "dma_device_type": 1 00:09:08.383 }, 00:09:08.383 { 00:09:08.383 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:08.383 "dma_device_type": 2 00:09:08.383 } 00:09:08.383 ], 00:09:08.383 "driver_specific": {} 00:09:08.383 } 00:09:08.383 ] 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.383 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.644 BaseBdev3 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.644 01:29:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.644 [ 00:09:08.644 { 00:09:08.644 "name": "BaseBdev3", 00:09:08.644 "aliases": [ 00:09:08.644 "f2b79c6b-88bc-49c3-bd3f-de02c6243157" 00:09:08.644 ], 00:09:08.644 "product_name": "Malloc disk", 00:09:08.644 "block_size": 512, 00:09:08.644 "num_blocks": 65536, 00:09:08.644 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:08.644 "assigned_rate_limits": { 00:09:08.644 "rw_ios_per_sec": 0, 00:09:08.644 "rw_mbytes_per_sec": 0, 00:09:08.644 "r_mbytes_per_sec": 0, 00:09:08.644 "w_mbytes_per_sec": 0 00:09:08.644 }, 00:09:08.644 "claimed": false, 00:09:08.644 "zoned": false, 00:09:08.644 "supported_io_types": { 00:09:08.644 "read": true, 00:09:08.644 "write": true, 00:09:08.644 "unmap": true, 00:09:08.644 "flush": true, 00:09:08.644 "reset": true, 00:09:08.644 "nvme_admin": false, 00:09:08.644 "nvme_io": false, 00:09:08.644 "nvme_io_md": false, 00:09:08.644 "write_zeroes": true, 00:09:08.644 "zcopy": true, 00:09:08.644 "get_zone_info": false, 00:09:08.644 "zone_management": false, 00:09:08.644 "zone_append": false, 00:09:08.644 "compare": false, 00:09:08.644 "compare_and_write": false, 00:09:08.644 "abort": true, 00:09:08.644 "seek_hole": false, 00:09:08.644 "seek_data": false, 00:09:08.644 "copy": true, 00:09:08.644 "nvme_iov_md": false 00:09:08.644 }, 00:09:08.644 "memory_domains": [ 00:09:08.644 { 00:09:08.644 "dma_device_id": "system", 00:09:08.644 "dma_device_type": 1 00:09:08.644 }, 00:09:08.644 { 00:09:08.644 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:08.644 "dma_device_type": 2 00:09:08.644 } 00:09:08.644 ], 00:09:08.644 "driver_specific": {} 00:09:08.644 } 00:09:08.644 ] 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.644 [2024-11-17 01:29:16.897633] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.644 [2024-11-17 01:29:16.897726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.644 [2024-11-17 01:29:16.897776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.644 [2024-11-17 01:29:16.899529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.644 
01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.644 "name": "Existed_Raid", 00:09:08.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.644 "strip_size_kb": 64, 00:09:08.644 "state": "configuring", 00:09:08.644 "raid_level": "raid0", 00:09:08.644 "superblock": false, 00:09:08.644 "num_base_bdevs": 3, 00:09:08.644 "num_base_bdevs_discovered": 2, 00:09:08.644 "num_base_bdevs_operational": 3, 00:09:08.644 "base_bdevs_list": [ 00:09:08.644 { 00:09:08.644 "name": "BaseBdev1", 00:09:08.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.644 "is_configured": false, 00:09:08.644 
"data_offset": 0, 00:09:08.644 "data_size": 0 00:09:08.644 }, 00:09:08.644 { 00:09:08.644 "name": "BaseBdev2", 00:09:08.644 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:08.644 "is_configured": true, 00:09:08.644 "data_offset": 0, 00:09:08.644 "data_size": 65536 00:09:08.644 }, 00:09:08.644 { 00:09:08.644 "name": "BaseBdev3", 00:09:08.644 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:08.644 "is_configured": true, 00:09:08.644 "data_offset": 0, 00:09:08.644 "data_size": 65536 00:09:08.644 } 00:09:08.644 ] 00:09:08.644 }' 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.644 01:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.904 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:08.904 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.904 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.164 [2024-11-17 01:29:17.364854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.164 "name": "Existed_Raid", 00:09:09.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.164 "strip_size_kb": 64, 00:09:09.164 "state": "configuring", 00:09:09.164 "raid_level": "raid0", 00:09:09.164 "superblock": false, 00:09:09.164 "num_base_bdevs": 3, 00:09:09.164 "num_base_bdevs_discovered": 1, 00:09:09.164 "num_base_bdevs_operational": 3, 00:09:09.164 "base_bdevs_list": [ 00:09:09.164 { 00:09:09.164 "name": "BaseBdev1", 00:09:09.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.164 "is_configured": false, 00:09:09.164 "data_offset": 0, 00:09:09.164 "data_size": 0 00:09:09.164 }, 00:09:09.164 { 00:09:09.164 "name": null, 00:09:09.164 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:09.164 "is_configured": false, 00:09:09.164 "data_offset": 0, 00:09:09.164 "data_size": 65536 00:09:09.164 }, 00:09:09.164 { 
00:09:09.164 "name": "BaseBdev3", 00:09:09.164 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:09.164 "is_configured": true, 00:09:09.164 "data_offset": 0, 00:09:09.164 "data_size": 65536 00:09:09.164 } 00:09:09.164 ] 00:09:09.164 }' 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.164 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.423 [2024-11-17 01:29:17.856017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.423 BaseBdev1 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.423 01:29:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.423 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.423 [ 00:09:09.423 { 00:09:09.423 "name": "BaseBdev1", 00:09:09.682 "aliases": [ 00:09:09.682 "9978d1d0-c535-4adc-9d1b-2bd872cc1431" 00:09:09.682 ], 00:09:09.682 "product_name": "Malloc disk", 00:09:09.682 "block_size": 512, 00:09:09.682 "num_blocks": 65536, 00:09:09.682 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:09.682 "assigned_rate_limits": { 00:09:09.682 "rw_ios_per_sec": 0, 00:09:09.682 "rw_mbytes_per_sec": 0, 00:09:09.682 "r_mbytes_per_sec": 0, 00:09:09.682 "w_mbytes_per_sec": 0 00:09:09.682 }, 00:09:09.682 "claimed": true, 00:09:09.682 "claim_type": "exclusive_write", 00:09:09.682 "zoned": false, 00:09:09.682 "supported_io_types": { 00:09:09.682 "read": true, 00:09:09.682 "write": true, 00:09:09.682 "unmap": true, 00:09:09.682 "flush": true, 
00:09:09.682 "reset": true, 00:09:09.682 "nvme_admin": false, 00:09:09.682 "nvme_io": false, 00:09:09.682 "nvme_io_md": false, 00:09:09.682 "write_zeroes": true, 00:09:09.682 "zcopy": true, 00:09:09.682 "get_zone_info": false, 00:09:09.682 "zone_management": false, 00:09:09.682 "zone_append": false, 00:09:09.682 "compare": false, 00:09:09.682 "compare_and_write": false, 00:09:09.682 "abort": true, 00:09:09.682 "seek_hole": false, 00:09:09.682 "seek_data": false, 00:09:09.682 "copy": true, 00:09:09.682 "nvme_iov_md": false 00:09:09.682 }, 00:09:09.682 "memory_domains": [ 00:09:09.682 { 00:09:09.682 "dma_device_id": "system", 00:09:09.682 "dma_device_type": 1 00:09:09.682 }, 00:09:09.682 { 00:09:09.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.683 "dma_device_type": 2 00:09:09.683 } 00:09:09.683 ], 00:09:09.683 "driver_specific": {} 00:09:09.683 } 00:09:09.683 ] 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.683 "name": "Existed_Raid", 00:09:09.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.683 "strip_size_kb": 64, 00:09:09.683 "state": "configuring", 00:09:09.683 "raid_level": "raid0", 00:09:09.683 "superblock": false, 00:09:09.683 "num_base_bdevs": 3, 00:09:09.683 "num_base_bdevs_discovered": 2, 00:09:09.683 "num_base_bdevs_operational": 3, 00:09:09.683 "base_bdevs_list": [ 00:09:09.683 { 00:09:09.683 "name": "BaseBdev1", 00:09:09.683 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:09.683 "is_configured": true, 00:09:09.683 "data_offset": 0, 00:09:09.683 "data_size": 65536 00:09:09.683 }, 00:09:09.683 { 00:09:09.683 "name": null, 00:09:09.683 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:09.683 "is_configured": false, 00:09:09.683 "data_offset": 0, 00:09:09.683 "data_size": 65536 00:09:09.683 }, 00:09:09.683 { 00:09:09.683 "name": "BaseBdev3", 00:09:09.683 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:09.683 "is_configured": true, 00:09:09.683 "data_offset": 0, 00:09:09.683 "data_size": 65536 
00:09:09.683 } 00:09:09.683 ] 00:09:09.683 }' 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.683 01:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.942 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.942 [2024-11-17 01:29:18.347224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.943 
01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.943 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.202 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.202 "name": "Existed_Raid", 00:09:10.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.202 "strip_size_kb": 64, 00:09:10.202 "state": "configuring", 00:09:10.202 "raid_level": "raid0", 00:09:10.202 "superblock": false, 00:09:10.202 "num_base_bdevs": 3, 00:09:10.202 "num_base_bdevs_discovered": 1, 00:09:10.202 "num_base_bdevs_operational": 3, 00:09:10.202 "base_bdevs_list": [ 00:09:10.202 { 00:09:10.202 "name": "BaseBdev1", 00:09:10.202 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:10.202 "is_configured": true, 00:09:10.202 "data_offset": 0, 00:09:10.202 "data_size": 65536 00:09:10.202 }, 00:09:10.202 { 00:09:10.202 "name": null, 
00:09:10.202 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:10.202 "is_configured": false, 00:09:10.202 "data_offset": 0, 00:09:10.202 "data_size": 65536 00:09:10.202 }, 00:09:10.202 { 00:09:10.202 "name": null, 00:09:10.202 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:10.202 "is_configured": false, 00:09:10.202 "data_offset": 0, 00:09:10.202 "data_size": 65536 00:09:10.202 } 00:09:10.202 ] 00:09:10.202 }' 00:09:10.202 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.202 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.462 [2024-11-17 01:29:18.862344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.462 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.463 "name": "Existed_Raid", 00:09:10.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.463 "strip_size_kb": 64, 00:09:10.463 "state": "configuring", 00:09:10.463 "raid_level": "raid0", 00:09:10.463 "superblock": false, 00:09:10.463 
"num_base_bdevs": 3, 00:09:10.463 "num_base_bdevs_discovered": 2, 00:09:10.463 "num_base_bdevs_operational": 3, 00:09:10.463 "base_bdevs_list": [ 00:09:10.463 { 00:09:10.463 "name": "BaseBdev1", 00:09:10.463 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:10.463 "is_configured": true, 00:09:10.463 "data_offset": 0, 00:09:10.463 "data_size": 65536 00:09:10.463 }, 00:09:10.463 { 00:09:10.463 "name": null, 00:09:10.463 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:10.463 "is_configured": false, 00:09:10.463 "data_offset": 0, 00:09:10.463 "data_size": 65536 00:09:10.463 }, 00:09:10.463 { 00:09:10.463 "name": "BaseBdev3", 00:09:10.463 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:10.463 "is_configured": true, 00:09:10.463 "data_offset": 0, 00:09:10.463 "data_size": 65536 00:09:10.463 } 00:09:10.463 ] 00:09:10.463 }' 00:09:10.463 01:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.722 01:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.981 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.981 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.981 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.981 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.981 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.981 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:10.981 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:10.981 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.981 01:29:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.981 [2024-11-17 01:29:19.373482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.239 "name": "Existed_Raid", 00:09:11.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.239 "strip_size_kb": 64, 00:09:11.239 "state": "configuring", 00:09:11.239 "raid_level": "raid0", 00:09:11.239 "superblock": false, 00:09:11.239 "num_base_bdevs": 3, 00:09:11.239 "num_base_bdevs_discovered": 1, 00:09:11.239 "num_base_bdevs_operational": 3, 00:09:11.239 "base_bdevs_list": [ 00:09:11.239 { 00:09:11.239 "name": null, 00:09:11.239 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:11.239 "is_configured": false, 00:09:11.239 "data_offset": 0, 00:09:11.239 "data_size": 65536 00:09:11.239 }, 00:09:11.239 { 00:09:11.239 "name": null, 00:09:11.239 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:11.239 "is_configured": false, 00:09:11.239 "data_offset": 0, 00:09:11.239 "data_size": 65536 00:09:11.239 }, 00:09:11.239 { 00:09:11.239 "name": "BaseBdev3", 00:09:11.239 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:11.239 "is_configured": true, 00:09:11.239 "data_offset": 0, 00:09:11.239 "data_size": 65536 00:09:11.239 } 00:09:11.239 ] 00:09:11.239 }' 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.239 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.499 [2024-11-17 01:29:19.931679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.499 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.759 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.759 "name": "Existed_Raid", 00:09:11.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.759 "strip_size_kb": 64, 00:09:11.759 "state": "configuring", 00:09:11.759 "raid_level": "raid0", 00:09:11.759 "superblock": false, 00:09:11.759 "num_base_bdevs": 3, 00:09:11.759 "num_base_bdevs_discovered": 2, 00:09:11.759 "num_base_bdevs_operational": 3, 00:09:11.759 "base_bdevs_list": [ 00:09:11.759 { 00:09:11.759 "name": null, 00:09:11.759 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:11.759 "is_configured": false, 00:09:11.759 "data_offset": 0, 00:09:11.759 "data_size": 65536 00:09:11.759 }, 00:09:11.759 { 00:09:11.759 "name": "BaseBdev2", 00:09:11.759 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:11.759 "is_configured": true, 00:09:11.759 "data_offset": 0, 00:09:11.759 "data_size": 65536 00:09:11.759 }, 00:09:11.759 { 00:09:11.759 "name": "BaseBdev3", 00:09:11.759 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:11.759 "is_configured": true, 00:09:11.759 "data_offset": 0, 00:09:11.759 "data_size": 65536 00:09:11.759 } 00:09:11.759 ] 00:09:11.759 }' 00:09:11.759 01:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.759 01:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.019 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.019 
01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.019 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.019 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.019 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.019 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9978d1d0-c535-4adc-9d1b-2bd872cc1431 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 [2024-11-17 01:29:20.558836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.279 [2024-11-17 01:29:20.558974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:12.279 [2024-11-17 01:29:20.559001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:12.279 [2024-11-17 01:29:20.559283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:12.279 [2024-11-17 01:29:20.559468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:12.279 [2024-11-17 01:29:20.559508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:12.279 [2024-11-17 01:29:20.559811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.279 NewBaseBdev 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:12.279 [ 00:09:12.279 { 00:09:12.279 "name": "NewBaseBdev", 00:09:12.279 "aliases": [ 00:09:12.279 "9978d1d0-c535-4adc-9d1b-2bd872cc1431" 00:09:12.279 ], 00:09:12.279 "product_name": "Malloc disk", 00:09:12.279 "block_size": 512, 00:09:12.279 "num_blocks": 65536, 00:09:12.279 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:12.279 "assigned_rate_limits": { 00:09:12.279 "rw_ios_per_sec": 0, 00:09:12.279 "rw_mbytes_per_sec": 0, 00:09:12.279 "r_mbytes_per_sec": 0, 00:09:12.279 "w_mbytes_per_sec": 0 00:09:12.279 }, 00:09:12.279 "claimed": true, 00:09:12.279 "claim_type": "exclusive_write", 00:09:12.279 "zoned": false, 00:09:12.279 "supported_io_types": { 00:09:12.279 "read": true, 00:09:12.279 "write": true, 00:09:12.279 "unmap": true, 00:09:12.279 "flush": true, 00:09:12.279 "reset": true, 00:09:12.279 "nvme_admin": false, 00:09:12.279 "nvme_io": false, 00:09:12.279 "nvme_io_md": false, 00:09:12.279 "write_zeroes": true, 00:09:12.279 "zcopy": true, 00:09:12.279 "get_zone_info": false, 00:09:12.279 "zone_management": false, 00:09:12.279 "zone_append": false, 00:09:12.279 "compare": false, 00:09:12.279 "compare_and_write": false, 00:09:12.279 "abort": true, 00:09:12.279 "seek_hole": false, 00:09:12.279 "seek_data": false, 00:09:12.279 "copy": true, 00:09:12.279 "nvme_iov_md": false 00:09:12.279 }, 00:09:12.279 "memory_domains": [ 00:09:12.279 { 00:09:12.279 "dma_device_id": "system", 00:09:12.279 "dma_device_type": 1 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.279 "dma_device_type": 2 00:09:12.279 } 00:09:12.279 ], 00:09:12.279 "driver_specific": {} 00:09:12.279 } 00:09:12.279 ] 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.279 "name": "Existed_Raid", 00:09:12.279 "uuid": "7f7234e6-d550-4296-b3a6-c1c62af1e1e1", 00:09:12.279 "strip_size_kb": 64, 00:09:12.279 "state": "online", 00:09:12.279 "raid_level": "raid0", 00:09:12.279 "superblock": false, 00:09:12.279 "num_base_bdevs": 3, 00:09:12.279 
"num_base_bdevs_discovered": 3, 00:09:12.279 "num_base_bdevs_operational": 3, 00:09:12.279 "base_bdevs_list": [ 00:09:12.279 { 00:09:12.279 "name": "NewBaseBdev", 00:09:12.279 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:12.279 "is_configured": true, 00:09:12.279 "data_offset": 0, 00:09:12.279 "data_size": 65536 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "name": "BaseBdev2", 00:09:12.279 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:12.279 "is_configured": true, 00:09:12.279 "data_offset": 0, 00:09:12.279 "data_size": 65536 00:09:12.279 }, 00:09:12.279 { 00:09:12.279 "name": "BaseBdev3", 00:09:12.279 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:12.279 "is_configured": true, 00:09:12.279 "data_offset": 0, 00:09:12.279 "data_size": 65536 00:09:12.279 } 00:09:12.279 ] 00:09:12.279 }' 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.279 01:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.850 [2024-11-17 01:29:21.038286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.850 "name": "Existed_Raid", 00:09:12.850 "aliases": [ 00:09:12.850 "7f7234e6-d550-4296-b3a6-c1c62af1e1e1" 00:09:12.850 ], 00:09:12.850 "product_name": "Raid Volume", 00:09:12.850 "block_size": 512, 00:09:12.850 "num_blocks": 196608, 00:09:12.850 "uuid": "7f7234e6-d550-4296-b3a6-c1c62af1e1e1", 00:09:12.850 "assigned_rate_limits": { 00:09:12.850 "rw_ios_per_sec": 0, 00:09:12.850 "rw_mbytes_per_sec": 0, 00:09:12.850 "r_mbytes_per_sec": 0, 00:09:12.850 "w_mbytes_per_sec": 0 00:09:12.850 }, 00:09:12.850 "claimed": false, 00:09:12.850 "zoned": false, 00:09:12.850 "supported_io_types": { 00:09:12.850 "read": true, 00:09:12.850 "write": true, 00:09:12.850 "unmap": true, 00:09:12.850 "flush": true, 00:09:12.850 "reset": true, 00:09:12.850 "nvme_admin": false, 00:09:12.850 "nvme_io": false, 00:09:12.850 "nvme_io_md": false, 00:09:12.850 "write_zeroes": true, 00:09:12.850 "zcopy": false, 00:09:12.850 "get_zone_info": false, 00:09:12.850 "zone_management": false, 00:09:12.850 "zone_append": false, 00:09:12.850 "compare": false, 00:09:12.850 "compare_and_write": false, 00:09:12.850 "abort": false, 00:09:12.850 "seek_hole": false, 00:09:12.850 "seek_data": false, 00:09:12.850 "copy": false, 00:09:12.850 "nvme_iov_md": false 00:09:12.850 }, 00:09:12.850 "memory_domains": [ 00:09:12.850 { 00:09:12.850 "dma_device_id": "system", 00:09:12.850 "dma_device_type": 1 00:09:12.850 }, 00:09:12.850 { 00:09:12.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.850 "dma_device_type": 2 00:09:12.850 }, 
00:09:12.850 { 00:09:12.850 "dma_device_id": "system", 00:09:12.850 "dma_device_type": 1 00:09:12.850 }, 00:09:12.850 { 00:09:12.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.850 "dma_device_type": 2 00:09:12.850 }, 00:09:12.850 { 00:09:12.850 "dma_device_id": "system", 00:09:12.850 "dma_device_type": 1 00:09:12.850 }, 00:09:12.850 { 00:09:12.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.850 "dma_device_type": 2 00:09:12.850 } 00:09:12.850 ], 00:09:12.850 "driver_specific": { 00:09:12.850 "raid": { 00:09:12.850 "uuid": "7f7234e6-d550-4296-b3a6-c1c62af1e1e1", 00:09:12.850 "strip_size_kb": 64, 00:09:12.850 "state": "online", 00:09:12.850 "raid_level": "raid0", 00:09:12.850 "superblock": false, 00:09:12.850 "num_base_bdevs": 3, 00:09:12.850 "num_base_bdevs_discovered": 3, 00:09:12.850 "num_base_bdevs_operational": 3, 00:09:12.850 "base_bdevs_list": [ 00:09:12.850 { 00:09:12.850 "name": "NewBaseBdev", 00:09:12.850 "uuid": "9978d1d0-c535-4adc-9d1b-2bd872cc1431", 00:09:12.850 "is_configured": true, 00:09:12.850 "data_offset": 0, 00:09:12.850 "data_size": 65536 00:09:12.850 }, 00:09:12.850 { 00:09:12.850 "name": "BaseBdev2", 00:09:12.850 "uuid": "d042f793-dff2-4b58-a395-28ef4ea2dcc9", 00:09:12.850 "is_configured": true, 00:09:12.850 "data_offset": 0, 00:09:12.850 "data_size": 65536 00:09:12.850 }, 00:09:12.850 { 00:09:12.850 "name": "BaseBdev3", 00:09:12.850 "uuid": "f2b79c6b-88bc-49c3-bd3f-de02c6243157", 00:09:12.850 "is_configured": true, 00:09:12.850 "data_offset": 0, 00:09:12.850 "data_size": 65536 00:09:12.850 } 00:09:12.850 ] 00:09:12.850 } 00:09:12.850 } 00:09:12.850 }' 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:12.850 BaseBdev2 00:09:12.850 BaseBdev3' 00:09:12.850 01:29:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.850 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.851 [2024-11-17 01:29:21.297530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.851 [2024-11-17 01:29:21.297555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.851 [2024-11-17 01:29:21.297621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.851 [2024-11-17 01:29:21.297670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.851 [2024-11-17 01:29:21.297682] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63654 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63654 ']' 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63654 00:09:12.851 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:13.110 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.110 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63654 00:09:13.110 killing process with pid 63654 00:09:13.110 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.110 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.110 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63654' 00:09:13.110 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63654 00:09:13.110 [2024-11-17 01:29:21.349389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.110 01:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63654 00:09:13.370 [2024-11-17 01:29:21.639553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.309 01:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:14.309 00:09:14.309 real 0m10.431s 00:09:14.309 user 0m16.669s 00:09:14.309 sys 0m1.828s 00:09:14.309 01:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:09:14.309 ************************************ 00:09:14.309 END TEST raid_state_function_test 00:09:14.309 ************************************ 00:09:14.309 01:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.309 01:29:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:14.309 01:29:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:14.309 01:29:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.309 01:29:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.309 ************************************ 00:09:14.309 START TEST raid_state_function_test_sb 00:09:14.309 ************************************ 00:09:14.309 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:14.309 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:14.309 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:14.309 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:14.309 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64273 00:09:14.568 01:29:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64273' 00:09:14.568 Process raid pid: 64273 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64273 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64273 ']' 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.568 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.568 [2024-11-17 01:29:22.858233] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:14.568 [2024-11-17 01:29:22.858345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.827 [2024-11-17 01:29:23.029363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.827 [2024-11-17 01:29:23.134224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.086 [2024-11-17 01:29:23.324723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.086 [2024-11-17 01:29:23.324775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 [2024-11-17 01:29:23.689391] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.346 [2024-11-17 01:29:23.689516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.346 [2024-11-17 01:29:23.689531] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.346 [2024-11-17 01:29:23.689541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.346 [2024-11-17 01:29:23.689547] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:15.346 [2024-11-17 01:29:23.689555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.346 "name": "Existed_Raid", 00:09:15.346 "uuid": "56b88d93-bf1b-4214-94df-0b24e15d5b14", 00:09:15.346 "strip_size_kb": 64, 00:09:15.346 "state": "configuring", 00:09:15.346 "raid_level": "raid0", 00:09:15.346 "superblock": true, 00:09:15.346 "num_base_bdevs": 3, 00:09:15.346 "num_base_bdevs_discovered": 0, 00:09:15.346 "num_base_bdevs_operational": 3, 00:09:15.346 "base_bdevs_list": [ 00:09:15.346 { 00:09:15.346 "name": "BaseBdev1", 00:09:15.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.346 "is_configured": false, 00:09:15.346 "data_offset": 0, 00:09:15.346 "data_size": 0 00:09:15.346 }, 00:09:15.346 { 00:09:15.346 "name": "BaseBdev2", 00:09:15.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.346 "is_configured": false, 00:09:15.346 "data_offset": 0, 00:09:15.346 "data_size": 0 00:09:15.346 }, 00:09:15.346 { 00:09:15.346 "name": "BaseBdev3", 00:09:15.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.346 "is_configured": false, 00:09:15.346 "data_offset": 0, 00:09:15.346 "data_size": 0 00:09:15.346 } 00:09:15.346 ] 00:09:15.346 }' 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.346 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 [2024-11-17 01:29:24.080664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.917 [2024-11-17 01:29:24.080773] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 [2024-11-17 01:29:24.088650] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.917 [2024-11-17 01:29:24.088747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.917 [2024-11-17 01:29:24.088791] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.917 [2024-11-17 01:29:24.088832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.917 [2024-11-17 01:29:24.088851] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.917 [2024-11-17 01:29:24.088873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 [2024-11-17 01:29:24.129658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.917 BaseBdev1 
00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 [ 00:09:15.917 { 00:09:15.917 "name": "BaseBdev1", 00:09:15.917 "aliases": [ 00:09:15.917 "4c89c8b2-fba7-46b2-b777-b4fd23500d11" 00:09:15.917 ], 00:09:15.917 "product_name": "Malloc disk", 00:09:15.917 "block_size": 512, 00:09:15.917 "num_blocks": 65536, 00:09:15.917 "uuid": "4c89c8b2-fba7-46b2-b777-b4fd23500d11", 00:09:15.917 "assigned_rate_limits": { 00:09:15.917 
"rw_ios_per_sec": 0, 00:09:15.917 "rw_mbytes_per_sec": 0, 00:09:15.917 "r_mbytes_per_sec": 0, 00:09:15.917 "w_mbytes_per_sec": 0 00:09:15.917 }, 00:09:15.917 "claimed": true, 00:09:15.917 "claim_type": "exclusive_write", 00:09:15.917 "zoned": false, 00:09:15.917 "supported_io_types": { 00:09:15.917 "read": true, 00:09:15.917 "write": true, 00:09:15.917 "unmap": true, 00:09:15.917 "flush": true, 00:09:15.917 "reset": true, 00:09:15.917 "nvme_admin": false, 00:09:15.917 "nvme_io": false, 00:09:15.917 "nvme_io_md": false, 00:09:15.917 "write_zeroes": true, 00:09:15.917 "zcopy": true, 00:09:15.917 "get_zone_info": false, 00:09:15.917 "zone_management": false, 00:09:15.917 "zone_append": false, 00:09:15.917 "compare": false, 00:09:15.917 "compare_and_write": false, 00:09:15.917 "abort": true, 00:09:15.917 "seek_hole": false, 00:09:15.917 "seek_data": false, 00:09:15.917 "copy": true, 00:09:15.917 "nvme_iov_md": false 00:09:15.917 }, 00:09:15.917 "memory_domains": [ 00:09:15.917 { 00:09:15.917 "dma_device_id": "system", 00:09:15.917 "dma_device_type": 1 00:09:15.917 }, 00:09:15.917 { 00:09:15.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.917 "dma_device_type": 2 00:09:15.917 } 00:09:15.917 ], 00:09:15.917 "driver_specific": {} 00:09:15.917 } 00:09:15.917 ] 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.917 "name": "Existed_Raid", 00:09:15.917 "uuid": "486aadfb-b7fc-48df-bf0f-027b064616b7", 00:09:15.917 "strip_size_kb": 64, 00:09:15.917 "state": "configuring", 00:09:15.917 "raid_level": "raid0", 00:09:15.917 "superblock": true, 00:09:15.917 "num_base_bdevs": 3, 00:09:15.917 "num_base_bdevs_discovered": 1, 00:09:15.917 "num_base_bdevs_operational": 3, 00:09:15.917 "base_bdevs_list": [ 00:09:15.917 { 00:09:15.917 "name": "BaseBdev1", 00:09:15.917 "uuid": "4c89c8b2-fba7-46b2-b777-b4fd23500d11", 00:09:15.917 "is_configured": true, 00:09:15.917 "data_offset": 2048, 00:09:15.917 "data_size": 63488 
00:09:15.917 }, 00:09:15.917 { 00:09:15.917 "name": "BaseBdev2", 00:09:15.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.917 "is_configured": false, 00:09:15.917 "data_offset": 0, 00:09:15.917 "data_size": 0 00:09:15.917 }, 00:09:15.917 { 00:09:15.917 "name": "BaseBdev3", 00:09:15.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.917 "is_configured": false, 00:09:15.917 "data_offset": 0, 00:09:15.917 "data_size": 0 00:09:15.917 } 00:09:15.917 ] 00:09:15.917 }' 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.917 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.177 [2024-11-17 01:29:24.604893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.177 [2024-11-17 01:29:24.604949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.177 [2024-11-17 01:29:24.616943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.177 [2024-11-17 
01:29:24.618656] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.177 [2024-11-17 01:29:24.618694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.177 [2024-11-17 01:29:24.618703] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.177 [2024-11-17 01:29:24.618712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.177 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.436 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.437 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.437 "name": "Existed_Raid", 00:09:16.437 "uuid": "3b3cf9bc-5ec8-48cf-9ad9-df359a0dc5c8", 00:09:16.437 "strip_size_kb": 64, 00:09:16.437 "state": "configuring", 00:09:16.437 "raid_level": "raid0", 00:09:16.437 "superblock": true, 00:09:16.437 "num_base_bdevs": 3, 00:09:16.437 "num_base_bdevs_discovered": 1, 00:09:16.437 "num_base_bdevs_operational": 3, 00:09:16.437 "base_bdevs_list": [ 00:09:16.437 { 00:09:16.437 "name": "BaseBdev1", 00:09:16.437 "uuid": "4c89c8b2-fba7-46b2-b777-b4fd23500d11", 00:09:16.437 "is_configured": true, 00:09:16.437 "data_offset": 2048, 00:09:16.437 "data_size": 63488 00:09:16.437 }, 00:09:16.437 { 00:09:16.437 "name": "BaseBdev2", 00:09:16.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.437 "is_configured": false, 00:09:16.437 "data_offset": 0, 00:09:16.437 "data_size": 0 00:09:16.437 }, 00:09:16.437 { 00:09:16.437 "name": "BaseBdev3", 00:09:16.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.437 "is_configured": false, 00:09:16.437 "data_offset": 0, 00:09:16.437 "data_size": 0 00:09:16.437 } 00:09:16.437 ] 00:09:16.437 }' 00:09:16.437 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.437 01:29:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 [2024-11-17 01:29:25.072507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.696 BaseBdev2 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 [ 00:09:16.696 { 00:09:16.696 "name": "BaseBdev2", 00:09:16.696 "aliases": [ 00:09:16.696 "3896e497-268e-486c-8f72-85b80e7012d7" 00:09:16.696 ], 00:09:16.696 "product_name": "Malloc disk", 00:09:16.696 "block_size": 512, 00:09:16.696 "num_blocks": 65536, 00:09:16.696 "uuid": "3896e497-268e-486c-8f72-85b80e7012d7", 00:09:16.696 "assigned_rate_limits": { 00:09:16.696 "rw_ios_per_sec": 0, 00:09:16.696 "rw_mbytes_per_sec": 0, 00:09:16.696 "r_mbytes_per_sec": 0, 00:09:16.696 "w_mbytes_per_sec": 0 00:09:16.696 }, 00:09:16.696 "claimed": true, 00:09:16.696 "claim_type": "exclusive_write", 00:09:16.696 "zoned": false, 00:09:16.696 "supported_io_types": { 00:09:16.696 "read": true, 00:09:16.696 "write": true, 00:09:16.696 "unmap": true, 00:09:16.696 "flush": true, 00:09:16.696 "reset": true, 00:09:16.696 "nvme_admin": false, 00:09:16.696 "nvme_io": false, 00:09:16.696 "nvme_io_md": false, 00:09:16.696 "write_zeroes": true, 00:09:16.696 "zcopy": true, 00:09:16.696 "get_zone_info": false, 00:09:16.696 "zone_management": false, 00:09:16.696 "zone_append": false, 00:09:16.696 "compare": false, 00:09:16.696 "compare_and_write": false, 00:09:16.696 "abort": true, 00:09:16.696 "seek_hole": false, 00:09:16.696 "seek_data": false, 00:09:16.696 "copy": true, 00:09:16.696 "nvme_iov_md": false 00:09:16.696 }, 00:09:16.696 "memory_domains": [ 00:09:16.696 { 00:09:16.696 "dma_device_id": "system", 00:09:16.696 "dma_device_type": 1 00:09:16.696 }, 00:09:16.696 { 00:09:16.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.696 "dma_device_type": 2 00:09:16.696 } 00:09:16.696 ], 00:09:16.696 "driver_specific": {} 00:09:16.696 } 00:09:16.696 ] 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.696 01:29:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.954 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.954 "name": "Existed_Raid", 00:09:16.954 "uuid": "3b3cf9bc-5ec8-48cf-9ad9-df359a0dc5c8", 00:09:16.954 "strip_size_kb": 64, 00:09:16.955 "state": "configuring", 00:09:16.955 "raid_level": "raid0", 00:09:16.955 "superblock": true, 00:09:16.955 "num_base_bdevs": 3, 00:09:16.955 "num_base_bdevs_discovered": 2, 00:09:16.955 "num_base_bdevs_operational": 3, 00:09:16.955 "base_bdevs_list": [ 00:09:16.955 { 00:09:16.955 "name": "BaseBdev1", 00:09:16.955 "uuid": "4c89c8b2-fba7-46b2-b777-b4fd23500d11", 00:09:16.955 "is_configured": true, 00:09:16.955 "data_offset": 2048, 00:09:16.955 "data_size": 63488 00:09:16.955 }, 00:09:16.955 { 00:09:16.955 "name": "BaseBdev2", 00:09:16.955 "uuid": "3896e497-268e-486c-8f72-85b80e7012d7", 00:09:16.955 "is_configured": true, 00:09:16.955 "data_offset": 2048, 00:09:16.955 "data_size": 63488 00:09:16.955 }, 00:09:16.955 { 00:09:16.955 "name": "BaseBdev3", 00:09:16.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.955 "is_configured": false, 00:09:16.955 "data_offset": 0, 00:09:16.955 "data_size": 0 00:09:16.955 } 00:09:16.955 ] 00:09:16.955 }' 00:09:16.955 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.955 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.214 [2024-11-17 01:29:25.586298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.214 [2024-11-17 01:29:25.586616] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.214 [2024-11-17 01:29:25.586643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.214 [2024-11-17 01:29:25.586966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:17.214 [2024-11-17 01:29:25.587127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.214 [2024-11-17 01:29:25.587137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:17.214 BaseBdev3 00:09:17.214 [2024-11-17 01:29:25.587293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.214 [ 00:09:17.214 { 00:09:17.214 "name": "BaseBdev3", 00:09:17.214 "aliases": [ 00:09:17.214 "5e847786-16fb-4219-ac6c-2082084ea840" 00:09:17.214 ], 00:09:17.214 "product_name": "Malloc disk", 00:09:17.214 "block_size": 512, 00:09:17.214 "num_blocks": 65536, 00:09:17.214 "uuid": "5e847786-16fb-4219-ac6c-2082084ea840", 00:09:17.214 "assigned_rate_limits": { 00:09:17.214 "rw_ios_per_sec": 0, 00:09:17.214 "rw_mbytes_per_sec": 0, 00:09:17.214 "r_mbytes_per_sec": 0, 00:09:17.214 "w_mbytes_per_sec": 0 00:09:17.214 }, 00:09:17.214 "claimed": true, 00:09:17.214 "claim_type": "exclusive_write", 00:09:17.214 "zoned": false, 00:09:17.214 "supported_io_types": { 00:09:17.214 "read": true, 00:09:17.214 "write": true, 00:09:17.214 "unmap": true, 00:09:17.214 "flush": true, 00:09:17.214 "reset": true, 00:09:17.214 "nvme_admin": false, 00:09:17.214 "nvme_io": false, 00:09:17.214 "nvme_io_md": false, 00:09:17.214 "write_zeroes": true, 00:09:17.214 "zcopy": true, 00:09:17.214 "get_zone_info": false, 00:09:17.214 "zone_management": false, 00:09:17.214 "zone_append": false, 00:09:17.214 "compare": false, 00:09:17.214 "compare_and_write": false, 00:09:17.214 "abort": true, 00:09:17.214 "seek_hole": false, 00:09:17.214 "seek_data": false, 00:09:17.214 "copy": true, 00:09:17.214 "nvme_iov_md": false 00:09:17.214 }, 00:09:17.214 "memory_domains": [ 00:09:17.214 { 00:09:17.214 "dma_device_id": "system", 00:09:17.214 "dma_device_type": 1 00:09:17.214 }, 00:09:17.214 { 00:09:17.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.214 "dma_device_type": 2 00:09:17.214 } 00:09:17.214 ], 00:09:17.214 "driver_specific": 
{} 00:09:17.214 } 00:09:17.214 ] 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.214 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.474 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.474 "name": "Existed_Raid", 00:09:17.474 "uuid": "3b3cf9bc-5ec8-48cf-9ad9-df359a0dc5c8", 00:09:17.474 "strip_size_kb": 64, 00:09:17.474 "state": "online", 00:09:17.474 "raid_level": "raid0", 00:09:17.474 "superblock": true, 00:09:17.474 "num_base_bdevs": 3, 00:09:17.474 "num_base_bdevs_discovered": 3, 00:09:17.474 "num_base_bdevs_operational": 3, 00:09:17.474 "base_bdevs_list": [ 00:09:17.474 { 00:09:17.474 "name": "BaseBdev1", 00:09:17.474 "uuid": "4c89c8b2-fba7-46b2-b777-b4fd23500d11", 00:09:17.474 "is_configured": true, 00:09:17.474 "data_offset": 2048, 00:09:17.474 "data_size": 63488 00:09:17.474 }, 00:09:17.474 { 00:09:17.474 "name": "BaseBdev2", 00:09:17.474 "uuid": "3896e497-268e-486c-8f72-85b80e7012d7", 00:09:17.474 "is_configured": true, 00:09:17.474 "data_offset": 2048, 00:09:17.474 "data_size": 63488 00:09:17.474 }, 00:09:17.474 { 00:09:17.474 "name": "BaseBdev3", 00:09:17.474 "uuid": "5e847786-16fb-4219-ac6c-2082084ea840", 00:09:17.474 "is_configured": true, 00:09:17.474 "data_offset": 2048, 00:09:17.474 "data_size": 63488 00:09:17.474 } 00:09:17.474 ] 00:09:17.474 }' 00:09:17.474 01:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.474 01:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.733 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.734 [2024-11-17 01:29:26.077830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.734 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.734 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.734 "name": "Existed_Raid", 00:09:17.734 "aliases": [ 00:09:17.734 "3b3cf9bc-5ec8-48cf-9ad9-df359a0dc5c8" 00:09:17.734 ], 00:09:17.734 "product_name": "Raid Volume", 00:09:17.734 "block_size": 512, 00:09:17.734 "num_blocks": 190464, 00:09:17.734 "uuid": "3b3cf9bc-5ec8-48cf-9ad9-df359a0dc5c8", 00:09:17.734 "assigned_rate_limits": { 00:09:17.734 "rw_ios_per_sec": 0, 00:09:17.734 "rw_mbytes_per_sec": 0, 00:09:17.734 "r_mbytes_per_sec": 0, 00:09:17.734 "w_mbytes_per_sec": 0 00:09:17.734 }, 00:09:17.734 "claimed": false, 00:09:17.734 "zoned": false, 00:09:17.734 "supported_io_types": { 00:09:17.734 "read": true, 00:09:17.734 "write": true, 00:09:17.734 "unmap": true, 00:09:17.734 "flush": true, 00:09:17.734 "reset": true, 00:09:17.734 "nvme_admin": false, 00:09:17.734 "nvme_io": false, 00:09:17.734 "nvme_io_md": false, 00:09:17.734 
"write_zeroes": true, 00:09:17.734 "zcopy": false, 00:09:17.734 "get_zone_info": false, 00:09:17.734 "zone_management": false, 00:09:17.734 "zone_append": false, 00:09:17.734 "compare": false, 00:09:17.734 "compare_and_write": false, 00:09:17.734 "abort": false, 00:09:17.734 "seek_hole": false, 00:09:17.734 "seek_data": false, 00:09:17.734 "copy": false, 00:09:17.734 "nvme_iov_md": false 00:09:17.734 }, 00:09:17.734 "memory_domains": [ 00:09:17.734 { 00:09:17.734 "dma_device_id": "system", 00:09:17.734 "dma_device_type": 1 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.734 "dma_device_type": 2 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "dma_device_id": "system", 00:09:17.734 "dma_device_type": 1 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.734 "dma_device_type": 2 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "dma_device_id": "system", 00:09:17.734 "dma_device_type": 1 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.734 "dma_device_type": 2 00:09:17.734 } 00:09:17.734 ], 00:09:17.734 "driver_specific": { 00:09:17.734 "raid": { 00:09:17.734 "uuid": "3b3cf9bc-5ec8-48cf-9ad9-df359a0dc5c8", 00:09:17.734 "strip_size_kb": 64, 00:09:17.734 "state": "online", 00:09:17.734 "raid_level": "raid0", 00:09:17.734 "superblock": true, 00:09:17.734 "num_base_bdevs": 3, 00:09:17.734 "num_base_bdevs_discovered": 3, 00:09:17.734 "num_base_bdevs_operational": 3, 00:09:17.734 "base_bdevs_list": [ 00:09:17.734 { 00:09:17.734 "name": "BaseBdev1", 00:09:17.734 "uuid": "4c89c8b2-fba7-46b2-b777-b4fd23500d11", 00:09:17.734 "is_configured": true, 00:09:17.734 "data_offset": 2048, 00:09:17.734 "data_size": 63488 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "name": "BaseBdev2", 00:09:17.734 "uuid": "3896e497-268e-486c-8f72-85b80e7012d7", 00:09:17.734 "is_configured": true, 00:09:17.734 "data_offset": 2048, 00:09:17.734 "data_size": 63488 00:09:17.734 }, 
00:09:17.734 { 00:09:17.734 "name": "BaseBdev3", 00:09:17.734 "uuid": "5e847786-16fb-4219-ac6c-2082084ea840", 00:09:17.734 "is_configured": true, 00:09:17.734 "data_offset": 2048, 00:09:17.734 "data_size": 63488 00:09:17.734 } 00:09:17.734 ] 00:09:17.734 } 00:09:17.734 } 00:09:17.734 }' 00:09:17.734 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.734 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:17.734 BaseBdev2 00:09:17.734 BaseBdev3' 00:09:17.734 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.993 
01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.993 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 [2024-11-17 01:29:26.377062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.993 [2024-11-17 01:29:26.377088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.993 [2024-11-17 01:29:26.377142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.252 "name": "Existed_Raid", 00:09:18.252 "uuid": "3b3cf9bc-5ec8-48cf-9ad9-df359a0dc5c8", 00:09:18.252 "strip_size_kb": 64, 00:09:18.252 "state": "offline", 00:09:18.252 "raid_level": "raid0", 00:09:18.252 "superblock": true, 00:09:18.252 "num_base_bdevs": 3, 00:09:18.252 "num_base_bdevs_discovered": 2, 00:09:18.252 "num_base_bdevs_operational": 2, 00:09:18.252 "base_bdevs_list": [ 00:09:18.252 { 00:09:18.252 "name": null, 00:09:18.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.252 "is_configured": false, 00:09:18.252 "data_offset": 0, 00:09:18.252 "data_size": 63488 00:09:18.252 }, 00:09:18.252 { 00:09:18.252 "name": "BaseBdev2", 00:09:18.252 "uuid": "3896e497-268e-486c-8f72-85b80e7012d7", 00:09:18.252 "is_configured": true, 00:09:18.252 "data_offset": 2048, 00:09:18.252 "data_size": 63488 00:09:18.252 }, 00:09:18.252 { 00:09:18.252 "name": "BaseBdev3", 00:09:18.252 "uuid": "5e847786-16fb-4219-ac6c-2082084ea840", 
00:09:18.252 "is_configured": true, 00:09:18.252 "data_offset": 2048, 00:09:18.252 "data_size": 63488 00:09:18.252 } 00:09:18.252 ] 00:09:18.252 }' 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.252 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.512 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:18.512 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.512 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.512 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.512 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.512 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.512 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.771 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.771 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.771 01:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:18.771 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.771 01:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.771 [2024-11-17 01:29:26.977731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.771 [2024-11-17 01:29:27.128201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.771 [2024-11-17 01:29:27.128248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.771 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.031 BaseBdev2 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:19.031 01:29:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.031 [ 00:09:19.031 { 00:09:19.031 "name": "BaseBdev2", 00:09:19.031 "aliases": [ 00:09:19.031 "765b1bb4-b2bf-44f5-a654-93fa6b043174" 00:09:19.031 ], 00:09:19.031 "product_name": "Malloc disk", 00:09:19.031 "block_size": 512, 00:09:19.031 "num_blocks": 65536, 00:09:19.031 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:19.031 "assigned_rate_limits": { 00:09:19.031 "rw_ios_per_sec": 0, 00:09:19.031 "rw_mbytes_per_sec": 0, 00:09:19.031 "r_mbytes_per_sec": 0, 00:09:19.031 "w_mbytes_per_sec": 0 00:09:19.031 }, 00:09:19.031 "claimed": false, 00:09:19.031 "zoned": false, 00:09:19.031 "supported_io_types": { 00:09:19.031 "read": true, 00:09:19.031 "write": true, 00:09:19.031 "unmap": true, 00:09:19.031 "flush": true, 00:09:19.031 "reset": true, 00:09:19.031 "nvme_admin": false, 00:09:19.031 "nvme_io": false, 00:09:19.031 "nvme_io_md": false, 00:09:19.031 "write_zeroes": true, 00:09:19.031 "zcopy": true, 00:09:19.031 "get_zone_info": false, 00:09:19.031 
"zone_management": false, 00:09:19.031 "zone_append": false, 00:09:19.031 "compare": false, 00:09:19.031 "compare_and_write": false, 00:09:19.031 "abort": true, 00:09:19.031 "seek_hole": false, 00:09:19.031 "seek_data": false, 00:09:19.031 "copy": true, 00:09:19.031 "nvme_iov_md": false 00:09:19.031 }, 00:09:19.031 "memory_domains": [ 00:09:19.031 { 00:09:19.031 "dma_device_id": "system", 00:09:19.031 "dma_device_type": 1 00:09:19.031 }, 00:09:19.031 { 00:09:19.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.031 "dma_device_type": 2 00:09:19.031 } 00:09:19.031 ], 00:09:19.031 "driver_specific": {} 00:09:19.031 } 00:09:19.031 ] 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.031 BaseBdev3 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.031 01:29:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.032 [ 00:09:19.032 { 00:09:19.032 "name": "BaseBdev3", 00:09:19.032 "aliases": [ 00:09:19.032 "0132110e-cb70-429e-9e87-e504ef86bce4" 00:09:19.032 ], 00:09:19.032 "product_name": "Malloc disk", 00:09:19.032 "block_size": 512, 00:09:19.032 "num_blocks": 65536, 00:09:19.032 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:19.032 "assigned_rate_limits": { 00:09:19.032 "rw_ios_per_sec": 0, 00:09:19.032 "rw_mbytes_per_sec": 0, 00:09:19.032 "r_mbytes_per_sec": 0, 00:09:19.032 "w_mbytes_per_sec": 0 00:09:19.032 }, 00:09:19.032 "claimed": false, 00:09:19.032 "zoned": false, 00:09:19.032 "supported_io_types": { 00:09:19.032 "read": true, 00:09:19.032 "write": true, 00:09:19.032 "unmap": true, 00:09:19.032 "flush": true, 00:09:19.032 "reset": true, 00:09:19.032 "nvme_admin": false, 00:09:19.032 "nvme_io": false, 00:09:19.032 "nvme_io_md": false, 00:09:19.032 "write_zeroes": true, 00:09:19.032 
"zcopy": true, 00:09:19.032 "get_zone_info": false, 00:09:19.032 "zone_management": false, 00:09:19.032 "zone_append": false, 00:09:19.032 "compare": false, 00:09:19.032 "compare_and_write": false, 00:09:19.032 "abort": true, 00:09:19.032 "seek_hole": false, 00:09:19.032 "seek_data": false, 00:09:19.032 "copy": true, 00:09:19.032 "nvme_iov_md": false 00:09:19.032 }, 00:09:19.032 "memory_domains": [ 00:09:19.032 { 00:09:19.032 "dma_device_id": "system", 00:09:19.032 "dma_device_type": 1 00:09:19.032 }, 00:09:19.032 { 00:09:19.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.032 "dma_device_type": 2 00:09:19.032 } 00:09:19.032 ], 00:09:19.032 "driver_specific": {} 00:09:19.032 } 00:09:19.032 ] 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.032 [2024-11-17 01:29:27.436502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.032 [2024-11-17 01:29:27.436586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.032 [2024-11-17 01:29:27.436627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.032 [2024-11-17 01:29:27.438340] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.032 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.292 01:29:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.292 "name": "Existed_Raid", 00:09:19.292 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:19.292 "strip_size_kb": 64, 00:09:19.292 "state": "configuring", 00:09:19.292 "raid_level": "raid0", 00:09:19.292 "superblock": true, 00:09:19.292 "num_base_bdevs": 3, 00:09:19.292 "num_base_bdevs_discovered": 2, 00:09:19.292 "num_base_bdevs_operational": 3, 00:09:19.292 "base_bdevs_list": [ 00:09:19.292 { 00:09:19.292 "name": "BaseBdev1", 00:09:19.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.292 "is_configured": false, 00:09:19.292 "data_offset": 0, 00:09:19.292 "data_size": 0 00:09:19.292 }, 00:09:19.292 { 00:09:19.292 "name": "BaseBdev2", 00:09:19.292 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:19.292 "is_configured": true, 00:09:19.292 "data_offset": 2048, 00:09:19.292 "data_size": 63488 00:09:19.292 }, 00:09:19.292 { 00:09:19.292 "name": "BaseBdev3", 00:09:19.292 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:19.292 "is_configured": true, 00:09:19.292 "data_offset": 2048, 00:09:19.292 "data_size": 63488 00:09:19.292 } 00:09:19.292 ] 00:09:19.292 }' 00:09:19.292 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.292 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.552 [2024-11-17 01:29:27.895726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.552 01:29:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.552 "name": "Existed_Raid", 00:09:19.552 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:19.552 "strip_size_kb": 64, 
00:09:19.552 "state": "configuring", 00:09:19.552 "raid_level": "raid0", 00:09:19.552 "superblock": true, 00:09:19.552 "num_base_bdevs": 3, 00:09:19.552 "num_base_bdevs_discovered": 1, 00:09:19.552 "num_base_bdevs_operational": 3, 00:09:19.552 "base_bdevs_list": [ 00:09:19.552 { 00:09:19.552 "name": "BaseBdev1", 00:09:19.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.552 "is_configured": false, 00:09:19.552 "data_offset": 0, 00:09:19.552 "data_size": 0 00:09:19.552 }, 00:09:19.552 { 00:09:19.552 "name": null, 00:09:19.552 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:19.552 "is_configured": false, 00:09:19.552 "data_offset": 0, 00:09:19.552 "data_size": 63488 00:09:19.552 }, 00:09:19.552 { 00:09:19.552 "name": "BaseBdev3", 00:09:19.552 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:19.552 "is_configured": true, 00:09:19.552 "data_offset": 2048, 00:09:19.552 "data_size": 63488 00:09:19.552 } 00:09:19.552 ] 00:09:19.552 }' 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.552 01:29:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.121 [2024-11-17 01:29:28.431566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.121 BaseBdev1 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.121 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.122 
[ 00:09:20.122 { 00:09:20.122 "name": "BaseBdev1", 00:09:20.122 "aliases": [ 00:09:20.122 "876a971d-53f0-40dd-989e-96b5126bcdae" 00:09:20.122 ], 00:09:20.122 "product_name": "Malloc disk", 00:09:20.122 "block_size": 512, 00:09:20.122 "num_blocks": 65536, 00:09:20.122 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:20.122 "assigned_rate_limits": { 00:09:20.122 "rw_ios_per_sec": 0, 00:09:20.122 "rw_mbytes_per_sec": 0, 00:09:20.122 "r_mbytes_per_sec": 0, 00:09:20.122 "w_mbytes_per_sec": 0 00:09:20.122 }, 00:09:20.122 "claimed": true, 00:09:20.122 "claim_type": "exclusive_write", 00:09:20.122 "zoned": false, 00:09:20.122 "supported_io_types": { 00:09:20.122 "read": true, 00:09:20.122 "write": true, 00:09:20.122 "unmap": true, 00:09:20.122 "flush": true, 00:09:20.122 "reset": true, 00:09:20.122 "nvme_admin": false, 00:09:20.122 "nvme_io": false, 00:09:20.122 "nvme_io_md": false, 00:09:20.122 "write_zeroes": true, 00:09:20.122 "zcopy": true, 00:09:20.122 "get_zone_info": false, 00:09:20.122 "zone_management": false, 00:09:20.122 "zone_append": false, 00:09:20.122 "compare": false, 00:09:20.122 "compare_and_write": false, 00:09:20.122 "abort": true, 00:09:20.122 "seek_hole": false, 00:09:20.122 "seek_data": false, 00:09:20.122 "copy": true, 00:09:20.122 "nvme_iov_md": false 00:09:20.122 }, 00:09:20.122 "memory_domains": [ 00:09:20.122 { 00:09:20.122 "dma_device_id": "system", 00:09:20.122 "dma_device_type": 1 00:09:20.122 }, 00:09:20.122 { 00:09:20.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.122 "dma_device_type": 2 00:09:20.122 } 00:09:20.122 ], 00:09:20.122 "driver_specific": {} 00:09:20.122 } 00:09:20.122 ] 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.122 "name": "Existed_Raid", 00:09:20.122 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:20.122 "strip_size_kb": 64, 00:09:20.122 "state": "configuring", 00:09:20.122 "raid_level": "raid0", 00:09:20.122 "superblock": true, 
00:09:20.122 "num_base_bdevs": 3, 00:09:20.122 "num_base_bdevs_discovered": 2, 00:09:20.122 "num_base_bdevs_operational": 3, 00:09:20.122 "base_bdevs_list": [ 00:09:20.122 { 00:09:20.122 "name": "BaseBdev1", 00:09:20.122 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:20.122 "is_configured": true, 00:09:20.122 "data_offset": 2048, 00:09:20.122 "data_size": 63488 00:09:20.122 }, 00:09:20.122 { 00:09:20.122 "name": null, 00:09:20.122 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:20.122 "is_configured": false, 00:09:20.122 "data_offset": 0, 00:09:20.122 "data_size": 63488 00:09:20.122 }, 00:09:20.122 { 00:09:20.122 "name": "BaseBdev3", 00:09:20.122 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:20.122 "is_configured": true, 00:09:20.122 "data_offset": 2048, 00:09:20.122 "data_size": 63488 00:09:20.122 } 00:09:20.122 ] 00:09:20.122 }' 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.122 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.691 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.691 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.691 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.691 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.691 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.691 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:20.691 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:20.691 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.692 [2024-11-17 01:29:28.958755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:20.692 01:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.692 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.692 "name": "Existed_Raid", 00:09:20.692 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:20.692 "strip_size_kb": 64, 00:09:20.692 "state": "configuring", 00:09:20.692 "raid_level": "raid0", 00:09:20.692 "superblock": true, 00:09:20.692 "num_base_bdevs": 3, 00:09:20.692 "num_base_bdevs_discovered": 1, 00:09:20.692 "num_base_bdevs_operational": 3, 00:09:20.692 "base_bdevs_list": [ 00:09:20.692 { 00:09:20.692 "name": "BaseBdev1", 00:09:20.692 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:20.692 "is_configured": true, 00:09:20.692 "data_offset": 2048, 00:09:20.692 "data_size": 63488 00:09:20.692 }, 00:09:20.692 { 00:09:20.692 "name": null, 00:09:20.692 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:20.692 "is_configured": false, 00:09:20.692 "data_offset": 0, 00:09:20.692 "data_size": 63488 00:09:20.692 }, 00:09:20.692 { 00:09:20.692 "name": null, 00:09:20.692 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:20.692 "is_configured": false, 00:09:20.692 "data_offset": 0, 00:09:20.692 "data_size": 63488 00:09:20.692 } 00:09:20.692 ] 00:09:20.692 }' 00:09:20.692 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.692 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.950 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.950 [2024-11-17 01:29:29.405992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.209 "name": "Existed_Raid", 00:09:21.209 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:21.209 "strip_size_kb": 64, 00:09:21.209 "state": "configuring", 00:09:21.209 "raid_level": "raid0", 00:09:21.209 "superblock": true, 00:09:21.209 "num_base_bdevs": 3, 00:09:21.209 "num_base_bdevs_discovered": 2, 00:09:21.209 "num_base_bdevs_operational": 3, 00:09:21.209 "base_bdevs_list": [ 00:09:21.209 { 00:09:21.209 "name": "BaseBdev1", 00:09:21.209 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:21.209 "is_configured": true, 00:09:21.209 "data_offset": 2048, 00:09:21.209 "data_size": 63488 00:09:21.209 }, 00:09:21.209 { 00:09:21.209 "name": null, 00:09:21.209 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:21.209 "is_configured": false, 00:09:21.209 "data_offset": 0, 00:09:21.209 "data_size": 63488 00:09:21.209 }, 00:09:21.209 { 00:09:21.209 "name": "BaseBdev3", 00:09:21.209 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:21.209 "is_configured": true, 00:09:21.209 "data_offset": 2048, 00:09:21.209 "data_size": 63488 00:09:21.209 } 00:09:21.209 ] 00:09:21.209 }' 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.209 01:29:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.469 [2024-11-17 01:29:29.825272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.469 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.729 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.729 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.729 "name": "Existed_Raid", 00:09:21.729 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:21.729 "strip_size_kb": 64, 00:09:21.729 "state": "configuring", 00:09:21.729 "raid_level": "raid0", 00:09:21.729 "superblock": true, 00:09:21.729 "num_base_bdevs": 3, 00:09:21.729 "num_base_bdevs_discovered": 1, 00:09:21.729 "num_base_bdevs_operational": 3, 00:09:21.729 "base_bdevs_list": [ 00:09:21.729 { 00:09:21.729 "name": null, 00:09:21.729 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:21.729 "is_configured": false, 00:09:21.729 "data_offset": 0, 00:09:21.729 "data_size": 63488 00:09:21.729 }, 00:09:21.729 { 00:09:21.729 "name": null, 00:09:21.729 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:21.729 "is_configured": false, 00:09:21.729 "data_offset": 0, 00:09:21.729 
"data_size": 63488 00:09:21.729 }, 00:09:21.729 { 00:09:21.729 "name": "BaseBdev3", 00:09:21.729 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:21.729 "is_configured": true, 00:09:21.729 "data_offset": 2048, 00:09:21.729 "data_size": 63488 00:09:21.729 } 00:09:21.729 ] 00:09:21.729 }' 00:09:21.729 01:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.729 01:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.991 [2024-11-17 01:29:30.370905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.991 01:29:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.991 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.991 "name": "Existed_Raid", 00:09:21.991 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:21.991 "strip_size_kb": 64, 00:09:21.991 "state": "configuring", 00:09:21.991 "raid_level": "raid0", 00:09:21.991 "superblock": true, 00:09:21.991 "num_base_bdevs": 3, 00:09:21.991 
"num_base_bdevs_discovered": 2, 00:09:21.991 "num_base_bdevs_operational": 3, 00:09:21.991 "base_bdevs_list": [ 00:09:21.991 { 00:09:21.991 "name": null, 00:09:21.991 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:21.991 "is_configured": false, 00:09:21.991 "data_offset": 0, 00:09:21.991 "data_size": 63488 00:09:21.991 }, 00:09:21.991 { 00:09:21.991 "name": "BaseBdev2", 00:09:21.991 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:21.991 "is_configured": true, 00:09:21.991 "data_offset": 2048, 00:09:21.992 "data_size": 63488 00:09:21.992 }, 00:09:21.992 { 00:09:21.992 "name": "BaseBdev3", 00:09:21.992 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:21.992 "is_configured": true, 00:09:21.992 "data_offset": 2048, 00:09:21.992 "data_size": 63488 00:09:21.992 } 00:09:21.992 ] 00:09:21.992 }' 00:09:21.992 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.992 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:22.565 01:29:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 876a971d-53f0-40dd-989e-96b5126bcdae 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 [2024-11-17 01:29:30.977253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:22.565 [2024-11-17 01:29:30.977462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:22.565 [2024-11-17 01:29:30.977478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.565 [2024-11-17 01:29:30.977712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:22.565 [2024-11-17 01:29:30.977888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:22.565 [2024-11-17 01:29:30.977903] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:22.565 [2024-11-17 01:29:30.978046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.565 NewBaseBdev 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:22.565 
01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 01:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 [ 00:09:22.565 { 00:09:22.565 "name": "NewBaseBdev", 00:09:22.565 "aliases": [ 00:09:22.565 "876a971d-53f0-40dd-989e-96b5126bcdae" 00:09:22.565 ], 00:09:22.565 "product_name": "Malloc disk", 00:09:22.565 "block_size": 512, 00:09:22.565 "num_blocks": 65536, 00:09:22.565 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:22.565 "assigned_rate_limits": { 00:09:22.565 "rw_ios_per_sec": 0, 00:09:22.565 "rw_mbytes_per_sec": 0, 00:09:22.565 "r_mbytes_per_sec": 0, 00:09:22.565 "w_mbytes_per_sec": 0 00:09:22.565 }, 00:09:22.565 "claimed": true, 00:09:22.565 "claim_type": "exclusive_write", 00:09:22.565 "zoned": false, 00:09:22.565 "supported_io_types": { 00:09:22.565 "read": true, 00:09:22.565 "write": true, 00:09:22.565 
"unmap": true, 00:09:22.565 "flush": true, 00:09:22.565 "reset": true, 00:09:22.565 "nvme_admin": false, 00:09:22.565 "nvme_io": false, 00:09:22.565 "nvme_io_md": false, 00:09:22.565 "write_zeroes": true, 00:09:22.565 "zcopy": true, 00:09:22.565 "get_zone_info": false, 00:09:22.565 "zone_management": false, 00:09:22.565 "zone_append": false, 00:09:22.565 "compare": false, 00:09:22.565 "compare_and_write": false, 00:09:22.565 "abort": true, 00:09:22.565 "seek_hole": false, 00:09:22.565 "seek_data": false, 00:09:22.565 "copy": true, 00:09:22.565 "nvme_iov_md": false 00:09:22.565 }, 00:09:22.565 "memory_domains": [ 00:09:22.565 { 00:09:22.565 "dma_device_id": "system", 00:09:22.565 "dma_device_type": 1 00:09:22.565 }, 00:09:22.565 { 00:09:22.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.565 "dma_device_type": 2 00:09:22.565 } 00:09:22.565 ], 00:09:22.565 "driver_specific": {} 00:09:22.565 } 00:09:22.565 ] 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.565 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.825 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.825 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.825 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.825 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.825 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.825 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.825 "name": "Existed_Raid", 00:09:22.825 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:22.825 "strip_size_kb": 64, 00:09:22.825 "state": "online", 00:09:22.825 "raid_level": "raid0", 00:09:22.825 "superblock": true, 00:09:22.825 "num_base_bdevs": 3, 00:09:22.825 "num_base_bdevs_discovered": 3, 00:09:22.825 "num_base_bdevs_operational": 3, 00:09:22.825 "base_bdevs_list": [ 00:09:22.825 { 00:09:22.825 "name": "NewBaseBdev", 00:09:22.825 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:22.825 "is_configured": true, 00:09:22.825 "data_offset": 2048, 00:09:22.825 "data_size": 63488 00:09:22.825 }, 00:09:22.825 { 00:09:22.825 "name": "BaseBdev2", 00:09:22.825 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:22.825 "is_configured": true, 00:09:22.825 "data_offset": 2048, 00:09:22.825 "data_size": 63488 00:09:22.825 }, 00:09:22.825 { 00:09:22.825 "name": "BaseBdev3", 00:09:22.825 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:22.825 
"is_configured": true, 00:09:22.825 "data_offset": 2048, 00:09:22.825 "data_size": 63488 00:09:22.825 } 00:09:22.825 ] 00:09:22.825 }' 00:09:22.825 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.825 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.085 [2024-11-17 01:29:31.448810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.085 "name": "Existed_Raid", 00:09:23.085 "aliases": [ 00:09:23.085 "e6c27690-af0a-4b19-80ee-7f99943d2731" 00:09:23.085 ], 00:09:23.085 "product_name": "Raid 
Volume", 00:09:23.085 "block_size": 512, 00:09:23.085 "num_blocks": 190464, 00:09:23.085 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:23.085 "assigned_rate_limits": { 00:09:23.085 "rw_ios_per_sec": 0, 00:09:23.085 "rw_mbytes_per_sec": 0, 00:09:23.085 "r_mbytes_per_sec": 0, 00:09:23.085 "w_mbytes_per_sec": 0 00:09:23.085 }, 00:09:23.085 "claimed": false, 00:09:23.085 "zoned": false, 00:09:23.085 "supported_io_types": { 00:09:23.085 "read": true, 00:09:23.085 "write": true, 00:09:23.085 "unmap": true, 00:09:23.085 "flush": true, 00:09:23.085 "reset": true, 00:09:23.085 "nvme_admin": false, 00:09:23.085 "nvme_io": false, 00:09:23.085 "nvme_io_md": false, 00:09:23.085 "write_zeroes": true, 00:09:23.085 "zcopy": false, 00:09:23.085 "get_zone_info": false, 00:09:23.085 "zone_management": false, 00:09:23.085 "zone_append": false, 00:09:23.085 "compare": false, 00:09:23.085 "compare_and_write": false, 00:09:23.085 "abort": false, 00:09:23.085 "seek_hole": false, 00:09:23.085 "seek_data": false, 00:09:23.085 "copy": false, 00:09:23.085 "nvme_iov_md": false 00:09:23.085 }, 00:09:23.085 "memory_domains": [ 00:09:23.085 { 00:09:23.085 "dma_device_id": "system", 00:09:23.085 "dma_device_type": 1 00:09:23.085 }, 00:09:23.085 { 00:09:23.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.085 "dma_device_type": 2 00:09:23.085 }, 00:09:23.085 { 00:09:23.085 "dma_device_id": "system", 00:09:23.085 "dma_device_type": 1 00:09:23.085 }, 00:09:23.085 { 00:09:23.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.085 "dma_device_type": 2 00:09:23.085 }, 00:09:23.085 { 00:09:23.085 "dma_device_id": "system", 00:09:23.085 "dma_device_type": 1 00:09:23.085 }, 00:09:23.085 { 00:09:23.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.085 "dma_device_type": 2 00:09:23.085 } 00:09:23.085 ], 00:09:23.085 "driver_specific": { 00:09:23.085 "raid": { 00:09:23.085 "uuid": "e6c27690-af0a-4b19-80ee-7f99943d2731", 00:09:23.085 "strip_size_kb": 64, 00:09:23.085 "state": "online", 
00:09:23.085 "raid_level": "raid0", 00:09:23.085 "superblock": true, 00:09:23.085 "num_base_bdevs": 3, 00:09:23.085 "num_base_bdevs_discovered": 3, 00:09:23.085 "num_base_bdevs_operational": 3, 00:09:23.085 "base_bdevs_list": [ 00:09:23.085 { 00:09:23.085 "name": "NewBaseBdev", 00:09:23.085 "uuid": "876a971d-53f0-40dd-989e-96b5126bcdae", 00:09:23.085 "is_configured": true, 00:09:23.085 "data_offset": 2048, 00:09:23.085 "data_size": 63488 00:09:23.085 }, 00:09:23.085 { 00:09:23.085 "name": "BaseBdev2", 00:09:23.085 "uuid": "765b1bb4-b2bf-44f5-a654-93fa6b043174", 00:09:23.085 "is_configured": true, 00:09:23.085 "data_offset": 2048, 00:09:23.085 "data_size": 63488 00:09:23.085 }, 00:09:23.085 { 00:09:23.085 "name": "BaseBdev3", 00:09:23.085 "uuid": "0132110e-cb70-429e-9e87-e504ef86bce4", 00:09:23.085 "is_configured": true, 00:09:23.085 "data_offset": 2048, 00:09:23.085 "data_size": 63488 00:09:23.085 } 00:09:23.085 ] 00:09:23.085 } 00:09:23.085 } 00:09:23.085 }' 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:23.085 BaseBdev2 00:09:23.085 BaseBdev3' 00:09:23.085 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.345 [2024-11-17 01:29:31.716042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.345 [2024-11-17 01:29:31.716073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.345 [2024-11-17 01:29:31.716144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.345 [2024-11-17 01:29:31.716194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.345 [2024-11-17 01:29:31.716205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64273 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64273 ']' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64273 00:09:23.345 01:29:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64273 00:09:23.345 killing process with pid 64273 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64273' 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64273 00:09:23.345 [2024-11-17 01:29:31.764076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.345 01:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64273 00:09:23.605 [2024-11-17 01:29:32.057155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.987 01:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:24.987 00:09:24.987 real 0m10.357s 00:09:24.987 user 0m16.503s 00:09:24.987 sys 0m1.864s 00:09:24.987 01:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.987 01:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.987 ************************************ 00:09:24.987 END TEST raid_state_function_test_sb 00:09:24.987 ************************************ 00:09:24.987 01:29:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:24.987 01:29:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:24.987 01:29:33 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.987 01:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.987 ************************************ 00:09:24.987 START TEST raid_superblock_test 00:09:24.987 ************************************ 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:24.987 01:29:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64893 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64893 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64893 ']' 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.987 01:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.987 [2024-11-17 01:29:33.273572] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:24.987 [2024-11-17 01:29:33.273719] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64893 ] 00:09:24.987 [2024-11-17 01:29:33.444518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.246 [2024-11-17 01:29:33.553743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.506 [2024-11-17 01:29:33.740131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.506 [2024-11-17 01:29:33.740174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:25.766 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:25.766 
01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 malloc1 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 [2024-11-17 01:29:34.149450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:25.767 [2024-11-17 01:29:34.149513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.767 [2024-11-17 01:29:34.149535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:25.767 [2024-11-17 01:29:34.149544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.767 [2024-11-17 01:29:34.151511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.767 [2024-11-17 01:29:34.151548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:25.767 pt1 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 malloc2 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.767 [2024-11-17 01:29:34.202724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:25.767 [2024-11-17 01:29:34.202782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.767 [2024-11-17 01:29:34.202804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:25.767 [2024-11-17 01:29:34.202812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.767 [2024-11-17 01:29:34.204717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.767 [2024-11-17 01:29:34.204752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:25.767 
pt2 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.767 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.027 malloc3 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.027 [2024-11-17 01:29:34.266003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.027 [2024-11-17 01:29:34.266051] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.027 [2024-11-17 01:29:34.266069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:26.027 [2024-11-17 01:29:34.266078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.027 [2024-11-17 01:29:34.268031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.027 [2024-11-17 01:29:34.268065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.027 pt3 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.027 [2024-11-17 01:29:34.278034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:26.027 [2024-11-17 01:29:34.279694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.027 [2024-11-17 01:29:34.279772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:26.027 [2024-11-17 01:29:34.279912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:26.027 [2024-11-17 01:29:34.279933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.027 [2024-11-17 01:29:34.280160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:26.027 [2024-11-17 01:29:34.280323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:26.027 [2024-11-17 01:29:34.280340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:26.027 [2024-11-17 01:29:34.280480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.027 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.028 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.028 01:29:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.028 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.028 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.028 "name": "raid_bdev1", 00:09:26.028 "uuid": "e29942c7-ea4a-4b17-95fa-7d26848b06e4", 00:09:26.028 "strip_size_kb": 64, 00:09:26.028 "state": "online", 00:09:26.028 "raid_level": "raid0", 00:09:26.028 "superblock": true, 00:09:26.028 "num_base_bdevs": 3, 00:09:26.028 "num_base_bdevs_discovered": 3, 00:09:26.028 "num_base_bdevs_operational": 3, 00:09:26.028 "base_bdevs_list": [ 00:09:26.028 { 00:09:26.028 "name": "pt1", 00:09:26.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.028 "is_configured": true, 00:09:26.028 "data_offset": 2048, 00:09:26.028 "data_size": 63488 00:09:26.028 }, 00:09:26.028 { 00:09:26.028 "name": "pt2", 00:09:26.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.028 "is_configured": true, 00:09:26.028 "data_offset": 2048, 00:09:26.028 "data_size": 63488 00:09:26.028 }, 00:09:26.028 { 00:09:26.028 "name": "pt3", 00:09:26.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.028 "is_configured": true, 00:09:26.028 "data_offset": 2048, 00:09:26.028 "data_size": 63488 00:09:26.028 } 00:09:26.028 ] 00:09:26.028 }' 00:09:26.028 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.028 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.288 [2024-11-17 01:29:34.725559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.288 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.548 "name": "raid_bdev1", 00:09:26.548 "aliases": [ 00:09:26.548 "e29942c7-ea4a-4b17-95fa-7d26848b06e4" 00:09:26.548 ], 00:09:26.548 "product_name": "Raid Volume", 00:09:26.548 "block_size": 512, 00:09:26.548 "num_blocks": 190464, 00:09:26.548 "uuid": "e29942c7-ea4a-4b17-95fa-7d26848b06e4", 00:09:26.548 "assigned_rate_limits": { 00:09:26.548 "rw_ios_per_sec": 0, 00:09:26.548 "rw_mbytes_per_sec": 0, 00:09:26.548 "r_mbytes_per_sec": 0, 00:09:26.548 "w_mbytes_per_sec": 0 00:09:26.548 }, 00:09:26.548 "claimed": false, 00:09:26.548 "zoned": false, 00:09:26.548 "supported_io_types": { 00:09:26.548 "read": true, 00:09:26.548 "write": true, 00:09:26.548 "unmap": true, 00:09:26.548 "flush": true, 00:09:26.548 "reset": true, 00:09:26.548 "nvme_admin": false, 00:09:26.548 "nvme_io": false, 00:09:26.548 "nvme_io_md": false, 00:09:26.548 "write_zeroes": true, 00:09:26.548 "zcopy": false, 00:09:26.548 "get_zone_info": false, 00:09:26.548 "zone_management": false, 00:09:26.548 "zone_append": false, 00:09:26.548 "compare": 
false, 00:09:26.548 "compare_and_write": false, 00:09:26.548 "abort": false, 00:09:26.548 "seek_hole": false, 00:09:26.548 "seek_data": false, 00:09:26.548 "copy": false, 00:09:26.548 "nvme_iov_md": false 00:09:26.548 }, 00:09:26.548 "memory_domains": [ 00:09:26.548 { 00:09:26.548 "dma_device_id": "system", 00:09:26.548 "dma_device_type": 1 00:09:26.548 }, 00:09:26.548 { 00:09:26.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.548 "dma_device_type": 2 00:09:26.548 }, 00:09:26.548 { 00:09:26.548 "dma_device_id": "system", 00:09:26.548 "dma_device_type": 1 00:09:26.548 }, 00:09:26.548 { 00:09:26.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.548 "dma_device_type": 2 00:09:26.548 }, 00:09:26.548 { 00:09:26.548 "dma_device_id": "system", 00:09:26.548 "dma_device_type": 1 00:09:26.548 }, 00:09:26.548 { 00:09:26.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.548 "dma_device_type": 2 00:09:26.548 } 00:09:26.548 ], 00:09:26.548 "driver_specific": { 00:09:26.548 "raid": { 00:09:26.548 "uuid": "e29942c7-ea4a-4b17-95fa-7d26848b06e4", 00:09:26.548 "strip_size_kb": 64, 00:09:26.548 "state": "online", 00:09:26.548 "raid_level": "raid0", 00:09:26.548 "superblock": true, 00:09:26.548 "num_base_bdevs": 3, 00:09:26.548 "num_base_bdevs_discovered": 3, 00:09:26.548 "num_base_bdevs_operational": 3, 00:09:26.548 "base_bdevs_list": [ 00:09:26.548 { 00:09:26.548 "name": "pt1", 00:09:26.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.548 "is_configured": true, 00:09:26.548 "data_offset": 2048, 00:09:26.548 "data_size": 63488 00:09:26.548 }, 00:09:26.548 { 00:09:26.548 "name": "pt2", 00:09:26.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.548 "is_configured": true, 00:09:26.548 "data_offset": 2048, 00:09:26.548 "data_size": 63488 00:09:26.548 }, 00:09:26.548 { 00:09:26.548 "name": "pt3", 00:09:26.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.548 "is_configured": true, 00:09:26.548 "data_offset": 2048, 00:09:26.548 "data_size": 
63488 00:09:26.548 } 00:09:26.548 ] 00:09:26.548 } 00:09:26.548 } 00:09:26.548 }' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:26.548 pt2 00:09:26.548 pt3' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:26.548 [2024-11-17 01:29:34.985044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.548 01:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e29942c7-ea4a-4b17-95fa-7d26848b06e4 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e29942c7-ea4a-4b17-95fa-7d26848b06e4 ']' 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.808 [2024-11-17 01:29:35.032718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.808 [2024-11-17 01:29:35.032752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.808 [2024-11-17 01:29:35.032839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.808 [2024-11-17 01:29:35.032898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.808 [2024-11-17 01:29:35.032910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.808 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.809 [2024-11-17 01:29:35.180477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:26.809 [2024-11-17 01:29:35.182251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:26.809 [2024-11-17 01:29:35.182305] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:26.809 [2024-11-17 01:29:35.182351] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:26.809 [2024-11-17 01:29:35.182396] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:26.809 [2024-11-17 01:29:35.182415] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:26.809 [2024-11-17 01:29:35.182432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.809 [2024-11-17 01:29:35.182444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:26.809 request: 00:09:26.809 { 00:09:26.809 "name": "raid_bdev1", 00:09:26.809 "raid_level": "raid0", 00:09:26.809 "base_bdevs": [ 00:09:26.809 "malloc1", 00:09:26.809 "malloc2", 00:09:26.809 "malloc3" 00:09:26.809 ], 00:09:26.809 "strip_size_kb": 64, 00:09:26.809 "superblock": false, 00:09:26.809 "method": "bdev_raid_create", 00:09:26.809 "req_id": 1 00:09:26.809 } 00:09:26.809 Got JSON-RPC error response 00:09:26.809 response: 00:09:26.809 { 00:09:26.809 "code": -17, 00:09:26.809 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:26.809 } 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.809 [2024-11-17 01:29:35.244365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:26.809 [2024-11-17 01:29:35.244431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.809 [2024-11-17 01:29:35.244450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:26.809 [2024-11-17 01:29:35.244460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.809 [2024-11-17 01:29:35.246571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.809 [2024-11-17 01:29:35.246607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:26.809 [2024-11-17 01:29:35.246697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:26.809 [2024-11-17 01:29:35.246753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:26.809 pt1 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.809 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.068 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.068 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.068 "name": "raid_bdev1", 00:09:27.068 "uuid": "e29942c7-ea4a-4b17-95fa-7d26848b06e4", 00:09:27.068 
"strip_size_kb": 64, 00:09:27.068 "state": "configuring", 00:09:27.068 "raid_level": "raid0", 00:09:27.068 "superblock": true, 00:09:27.068 "num_base_bdevs": 3, 00:09:27.068 "num_base_bdevs_discovered": 1, 00:09:27.068 "num_base_bdevs_operational": 3, 00:09:27.068 "base_bdevs_list": [ 00:09:27.068 { 00:09:27.068 "name": "pt1", 00:09:27.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.068 "is_configured": true, 00:09:27.068 "data_offset": 2048, 00:09:27.068 "data_size": 63488 00:09:27.068 }, 00:09:27.068 { 00:09:27.068 "name": null, 00:09:27.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.068 "is_configured": false, 00:09:27.068 "data_offset": 2048, 00:09:27.068 "data_size": 63488 00:09:27.068 }, 00:09:27.068 { 00:09:27.068 "name": null, 00:09:27.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.068 "is_configured": false, 00:09:27.068 "data_offset": 2048, 00:09:27.068 "data_size": 63488 00:09:27.068 } 00:09:27.068 ] 00:09:27.068 }' 00:09:27.068 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.068 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.327 [2024-11-17 01:29:35.659658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:27.327 [2024-11-17 01:29:35.659735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.327 [2024-11-17 01:29:35.659773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:27.327 [2024-11-17 01:29:35.659787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.327 [2024-11-17 01:29:35.660219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.327 [2024-11-17 01:29:35.660244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:27.327 [2024-11-17 01:29:35.660332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:27.327 [2024-11-17 01:29:35.660359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:27.327 pt2 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.327 [2024-11-17 01:29:35.671668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.327 01:29:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.327 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.327 "name": "raid_bdev1", 00:09:27.327 "uuid": "e29942c7-ea4a-4b17-95fa-7d26848b06e4", 00:09:27.327 "strip_size_kb": 64, 00:09:27.327 "state": "configuring", 00:09:27.327 "raid_level": "raid0", 00:09:27.327 "superblock": true, 00:09:27.327 "num_base_bdevs": 3, 00:09:27.327 "num_base_bdevs_discovered": 1, 00:09:27.327 "num_base_bdevs_operational": 3, 00:09:27.327 "base_bdevs_list": [ 00:09:27.327 { 00:09:27.327 "name": "pt1", 00:09:27.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.327 "is_configured": true, 00:09:27.327 "data_offset": 2048, 00:09:27.327 "data_size": 63488 00:09:27.327 }, 00:09:27.327 { 00:09:27.327 "name": null, 00:09:27.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.327 "is_configured": false, 00:09:27.327 "data_offset": 0, 00:09:27.327 "data_size": 63488 00:09:27.327 }, 00:09:27.327 { 00:09:27.328 "name": null, 00:09:27.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.328 
"is_configured": false, 00:09:27.328 "data_offset": 2048, 00:09:27.328 "data_size": 63488 00:09:27.328 } 00:09:27.328 ] 00:09:27.328 }' 00:09:27.328 01:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.328 01:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.896 [2024-11-17 01:29:36.122867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:27.896 [2024-11-17 01:29:36.122935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.896 [2024-11-17 01:29:36.122952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:27.896 [2024-11-17 01:29:36.122963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.896 [2024-11-17 01:29:36.123431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.896 [2024-11-17 01:29:36.123460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:27.896 [2024-11-17 01:29:36.123539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:27.896 [2024-11-17 01:29:36.123569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:27.896 pt2 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.896 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.896 [2024-11-17 01:29:36.134825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:27.896 [2024-11-17 01:29:36.134871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.896 [2024-11-17 01:29:36.134901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:27.896 [2024-11-17 01:29:36.134911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.896 [2024-11-17 01:29:36.135269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.896 [2024-11-17 01:29:36.135296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:27.896 [2024-11-17 01:29:36.135351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:27.896 [2024-11-17 01:29:36.135371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:27.896 [2024-11-17 01:29:36.135485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:27.896 [2024-11-17 01:29:36.135503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:27.896 [2024-11-17 01:29:36.135727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:27.896 [2024-11-17 01:29:36.135886] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.896 [2024-11-17 01:29:36.135899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:27.896 [2024-11-17 01:29:36.136026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.896 pt3 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.897 "name": "raid_bdev1", 00:09:27.897 "uuid": "e29942c7-ea4a-4b17-95fa-7d26848b06e4", 00:09:27.897 "strip_size_kb": 64, 00:09:27.897 "state": "online", 00:09:27.897 "raid_level": "raid0", 00:09:27.897 "superblock": true, 00:09:27.897 "num_base_bdevs": 3, 00:09:27.897 "num_base_bdevs_discovered": 3, 00:09:27.897 "num_base_bdevs_operational": 3, 00:09:27.897 "base_bdevs_list": [ 00:09:27.897 { 00:09:27.897 "name": "pt1", 00:09:27.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.897 "is_configured": true, 00:09:27.897 "data_offset": 2048, 00:09:27.897 "data_size": 63488 00:09:27.897 }, 00:09:27.897 { 00:09:27.897 "name": "pt2", 00:09:27.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.897 "is_configured": true, 00:09:27.897 "data_offset": 2048, 00:09:27.897 "data_size": 63488 00:09:27.897 }, 00:09:27.897 { 00:09:27.897 "name": "pt3", 00:09:27.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.897 "is_configured": true, 00:09:27.897 "data_offset": 2048, 00:09:27.897 "data_size": 63488 00:09:27.897 } 00:09:27.897 ] 00:09:27.897 }' 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.897 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:28.157 01:29:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.157 [2024-11-17 01:29:36.590358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.157 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:28.417 "name": "raid_bdev1", 00:09:28.417 "aliases": [ 00:09:28.417 "e29942c7-ea4a-4b17-95fa-7d26848b06e4" 00:09:28.417 ], 00:09:28.417 "product_name": "Raid Volume", 00:09:28.417 "block_size": 512, 00:09:28.417 "num_blocks": 190464, 00:09:28.417 "uuid": "e29942c7-ea4a-4b17-95fa-7d26848b06e4", 00:09:28.417 "assigned_rate_limits": { 00:09:28.417 "rw_ios_per_sec": 0, 00:09:28.417 "rw_mbytes_per_sec": 0, 00:09:28.417 "r_mbytes_per_sec": 0, 00:09:28.417 "w_mbytes_per_sec": 0 00:09:28.417 }, 00:09:28.417 "claimed": false, 00:09:28.417 "zoned": false, 00:09:28.417 "supported_io_types": { 00:09:28.417 "read": true, 00:09:28.417 "write": true, 00:09:28.417 "unmap": true, 00:09:28.417 "flush": true, 00:09:28.417 "reset": true, 00:09:28.417 "nvme_admin": false, 00:09:28.417 "nvme_io": false, 00:09:28.417 "nvme_io_md": false, 00:09:28.417 
"write_zeroes": true, 00:09:28.417 "zcopy": false, 00:09:28.417 "get_zone_info": false, 00:09:28.417 "zone_management": false, 00:09:28.417 "zone_append": false, 00:09:28.417 "compare": false, 00:09:28.417 "compare_and_write": false, 00:09:28.417 "abort": false, 00:09:28.417 "seek_hole": false, 00:09:28.417 "seek_data": false, 00:09:28.417 "copy": false, 00:09:28.417 "nvme_iov_md": false 00:09:28.417 }, 00:09:28.417 "memory_domains": [ 00:09:28.417 { 00:09:28.417 "dma_device_id": "system", 00:09:28.417 "dma_device_type": 1 00:09:28.417 }, 00:09:28.417 { 00:09:28.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.417 "dma_device_type": 2 00:09:28.417 }, 00:09:28.417 { 00:09:28.417 "dma_device_id": "system", 00:09:28.417 "dma_device_type": 1 00:09:28.417 }, 00:09:28.417 { 00:09:28.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.417 "dma_device_type": 2 00:09:28.417 }, 00:09:28.417 { 00:09:28.417 "dma_device_id": "system", 00:09:28.417 "dma_device_type": 1 00:09:28.417 }, 00:09:28.417 { 00:09:28.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.417 "dma_device_type": 2 00:09:28.417 } 00:09:28.417 ], 00:09:28.417 "driver_specific": { 00:09:28.417 "raid": { 00:09:28.417 "uuid": "e29942c7-ea4a-4b17-95fa-7d26848b06e4", 00:09:28.417 "strip_size_kb": 64, 00:09:28.417 "state": "online", 00:09:28.417 "raid_level": "raid0", 00:09:28.417 "superblock": true, 00:09:28.417 "num_base_bdevs": 3, 00:09:28.417 "num_base_bdevs_discovered": 3, 00:09:28.417 "num_base_bdevs_operational": 3, 00:09:28.417 "base_bdevs_list": [ 00:09:28.417 { 00:09:28.417 "name": "pt1", 00:09:28.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.417 "is_configured": true, 00:09:28.417 "data_offset": 2048, 00:09:28.417 "data_size": 63488 00:09:28.417 }, 00:09:28.417 { 00:09:28.417 "name": "pt2", 00:09:28.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.417 "is_configured": true, 00:09:28.417 "data_offset": 2048, 00:09:28.417 "data_size": 63488 00:09:28.417 }, 00:09:28.417 
{ 00:09:28.417 "name": "pt3", 00:09:28.417 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.417 "is_configured": true, 00:09:28.417 "data_offset": 2048, 00:09:28.417 "data_size": 63488 00:09:28.417 } 00:09:28.417 ] 00:09:28.417 } 00:09:28.417 } 00:09:28.417 }' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:28.417 pt2 00:09:28.417 pt3' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:28.417 01:29:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.417 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:28.676 
[2024-11-17 01:29:36.881868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e29942c7-ea4a-4b17-95fa-7d26848b06e4 '!=' e29942c7-ea4a-4b17-95fa-7d26848b06e4 ']' 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64893 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64893 ']' 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64893 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64893 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.676 killing process with pid 64893 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64893' 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64893 00:09:28.676 [2024-11-17 01:29:36.952742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:28.676 [2024-11-17 01:29:36.952870] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.676 [2024-11-17 01:29:36.952932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.676 01:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64893 00:09:28.676 [2024-11-17 01:29:36.952944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:28.935 [2024-11-17 01:29:37.239010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.887 01:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:29.887 00:09:29.887 real 0m5.100s 00:09:29.887 user 0m7.364s 00:09:29.887 sys 0m0.822s 00:09:29.887 01:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.887 01:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.887 ************************************ 00:09:29.887 END TEST raid_superblock_test 00:09:29.887 ************************************ 00:09:30.146 01:29:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:30.146 01:29:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:30.146 01:29:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.146 01:29:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.146 ************************************ 00:09:30.146 START TEST raid_read_error_test 00:09:30.146 ************************************ 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:30.146 01:29:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:30.146 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PmBnEXJD79 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65146 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65146 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65146 ']' 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.147 01:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.147 [2024-11-17 01:29:38.460121] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:30.147 [2024-11-17 01:29:38.460237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65146 ] 00:09:30.406 [2024-11-17 01:29:38.630475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.406 [2024-11-17 01:29:38.741237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.665 [2024-11-17 01:29:38.927837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.665 [2024-11-17 01:29:38.927881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.925 BaseBdev1_malloc 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.925 true 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.925 [2024-11-17 01:29:39.353524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:30.925 [2024-11-17 01:29:39.353579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.925 [2024-11-17 01:29:39.353614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:30.925 [2024-11-17 01:29:39.353625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.925 [2024-11-17 01:29:39.355651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.925 [2024-11-17 01:29:39.355693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:30.925 BaseBdev1 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.925 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.184 BaseBdev2_malloc 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.184 true 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.184 [2024-11-17 01:29:39.414847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:31.184 [2024-11-17 01:29:39.414899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.184 [2024-11-17 01:29:39.414929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:31.184 [2024-11-17 01:29:39.414940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.184 [2024-11-17 01:29:39.416889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.184 [2024-11-17 01:29:39.416925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:31.184 BaseBdev2 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.184 BaseBdev3_malloc 00:09:31.184 01:29:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.184 true 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.184 [2024-11-17 01:29:39.491690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:31.184 [2024-11-17 01:29:39.491743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.184 [2024-11-17 01:29:39.491787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:31.184 [2024-11-17 01:29:39.491799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.184 [2024-11-17 01:29:39.493839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.184 [2024-11-17 01:29:39.493875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:31.184 BaseBdev3 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.184 [2024-11-17 01:29:39.503741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.184 [2024-11-17 01:29:39.505449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.184 [2024-11-17 01:29:39.505528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.184 [2024-11-17 01:29:39.505710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:31.184 [2024-11-17 01:29:39.505729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.184 [2024-11-17 01:29:39.505985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:31.184 [2024-11-17 01:29:39.506151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:31.184 [2024-11-17 01:29:39.506172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:31.184 [2024-11-17 01:29:39.506317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.184 01:29:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.184 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.185 "name": "raid_bdev1", 00:09:31.185 "uuid": "7a82e772-33f2-4eff-a481-e0a42d72c240", 00:09:31.185 "strip_size_kb": 64, 00:09:31.185 "state": "online", 00:09:31.185 "raid_level": "raid0", 00:09:31.185 "superblock": true, 00:09:31.185 "num_base_bdevs": 3, 00:09:31.185 "num_base_bdevs_discovered": 3, 00:09:31.185 "num_base_bdevs_operational": 3, 00:09:31.185 "base_bdevs_list": [ 00:09:31.185 { 00:09:31.185 "name": "BaseBdev1", 00:09:31.185 "uuid": "a4b6fb62-5e9b-5711-91b6-18f6bddf714f", 00:09:31.185 "is_configured": true, 00:09:31.185 "data_offset": 2048, 00:09:31.185 "data_size": 63488 00:09:31.185 }, 00:09:31.185 { 00:09:31.185 "name": "BaseBdev2", 00:09:31.185 "uuid": "b9aef1cb-2087-5467-b722-020793e13063", 00:09:31.185 "is_configured": true, 00:09:31.185 "data_offset": 2048, 00:09:31.185 "data_size": 63488 
00:09:31.185 }, 00:09:31.185 { 00:09:31.185 "name": "BaseBdev3", 00:09:31.185 "uuid": "58f3abcc-4daf-59f6-b5ac-33242933d84e", 00:09:31.185 "is_configured": true, 00:09:31.185 "data_offset": 2048, 00:09:31.185 "data_size": 63488 00:09:31.185 } 00:09:31.185 ] 00:09:31.185 }' 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.185 01:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.445 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:31.445 01:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:31.704 [2024-11-17 01:29:39.944168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.643 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.643 "name": "raid_bdev1", 00:09:32.643 "uuid": "7a82e772-33f2-4eff-a481-e0a42d72c240", 00:09:32.643 "strip_size_kb": 64, 00:09:32.644 "state": "online", 00:09:32.644 "raid_level": "raid0", 00:09:32.644 "superblock": true, 00:09:32.644 "num_base_bdevs": 3, 00:09:32.644 "num_base_bdevs_discovered": 3, 00:09:32.644 "num_base_bdevs_operational": 3, 00:09:32.644 "base_bdevs_list": [ 00:09:32.644 { 00:09:32.644 "name": "BaseBdev1", 00:09:32.644 "uuid": "a4b6fb62-5e9b-5711-91b6-18f6bddf714f", 00:09:32.644 "is_configured": true, 00:09:32.644 "data_offset": 2048, 00:09:32.644 "data_size": 63488 
00:09:32.644 }, 00:09:32.644 { 00:09:32.644 "name": "BaseBdev2", 00:09:32.644 "uuid": "b9aef1cb-2087-5467-b722-020793e13063", 00:09:32.644 "is_configured": true, 00:09:32.644 "data_offset": 2048, 00:09:32.644 "data_size": 63488 00:09:32.644 }, 00:09:32.644 { 00:09:32.644 "name": "BaseBdev3", 00:09:32.644 "uuid": "58f3abcc-4daf-59f6-b5ac-33242933d84e", 00:09:32.644 "is_configured": true, 00:09:32.644 "data_offset": 2048, 00:09:32.644 "data_size": 63488 00:09:32.644 } 00:09:32.644 ] 00:09:32.644 }' 00:09:32.644 01:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.644 01:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.903 01:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.903 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.903 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.903 [2024-11-17 01:29:41.347915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.903 [2024-11-17 01:29:41.347964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.903 [2024-11-17 01:29:41.350576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.903 [2024-11-17 01:29:41.350622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.903 [2024-11-17 01:29:41.350659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.903 [2024-11-17 01:29:41.350669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:32.903 { 00:09:32.903 "results": [ 00:09:32.903 { 00:09:32.903 "job": "raid_bdev1", 00:09:32.903 "core_mask": "0x1", 00:09:32.903 "workload": "randrw", 00:09:32.903 "percentage": 50, 
00:09:32.903 "status": "finished", 00:09:32.903 "queue_depth": 1, 00:09:32.903 "io_size": 131072, 00:09:32.903 "runtime": 1.404783, 00:09:32.903 "iops": 16527.819599183647, 00:09:32.903 "mibps": 2065.977449897956, 00:09:32.903 "io_failed": 1, 00:09:32.903 "io_timeout": 0, 00:09:32.903 "avg_latency_us": 84.08210968618346, 00:09:32.903 "min_latency_us": 24.482096069868994, 00:09:32.903 "max_latency_us": 1366.5257641921398 00:09:32.903 } 00:09:32.903 ], 00:09:32.903 "core_count": 1 00:09:32.903 } 00:09:32.903 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.903 01:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65146 00:09:32.903 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65146 ']' 00:09:32.903 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65146 00:09:32.903 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:33.162 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.162 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65146 00:09:33.162 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.162 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.162 killing process with pid 65146 00:09:33.162 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65146' 00:09:33.162 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65146 00:09:33.162 [2024-11-17 01:29:41.395713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.162 01:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65146 00:09:33.421 [2024-11-17 
01:29:41.620801] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PmBnEXJD79 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:34.357 00:09:34.357 real 0m4.374s 00:09:34.357 user 0m5.166s 00:09:34.357 sys 0m0.545s 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.357 01:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.357 ************************************ 00:09:34.357 END TEST raid_read_error_test 00:09:34.357 ************************************ 00:09:34.357 01:29:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:34.357 01:29:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:34.357 01:29:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.357 01:29:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.357 ************************************ 00:09:34.357 START TEST raid_write_error_test 00:09:34.357 ************************************ 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:34.357 01:29:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:34.357 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:34.358 01:29:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:34.358 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lak95r9Lfn 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65286 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65286 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65286 ']' 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.618 01:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.618 [2024-11-17 01:29:42.905369] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:34.618 [2024-11-17 01:29:42.905484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65286 ] 00:09:34.618 [2024-11-17 01:29:43.075675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.877 [2024-11-17 01:29:43.186709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.136 [2024-11-17 01:29:43.379221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.136 [2024-11-17 01:29:43.379271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 BaseBdev1_malloc 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 true 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 [2024-11-17 01:29:43.785351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:35.396 [2024-11-17 01:29:43.785408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.396 [2024-11-17 01:29:43.785428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:35.396 [2024-11-17 01:29:43.785440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.396 [2024-11-17 01:29:43.787467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.396 [2024-11-17 01:29:43.787509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:35.396 BaseBdev1 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.396 BaseBdev2_malloc 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 true 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.396 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 [2024-11-17 01:29:43.849601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:35.396 [2024-11-17 01:29:43.849654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.396 [2024-11-17 01:29:43.849669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:35.396 [2024-11-17 01:29:43.849679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.396 [2024-11-17 01:29:43.851657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.396 [2024-11-17 01:29:43.851699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:35.396 BaseBdev2 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:35.655 01:29:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.655 BaseBdev3_malloc 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.655 true 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.655 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.655 [2024-11-17 01:29:43.923272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:35.655 [2024-11-17 01:29:43.923325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.655 [2024-11-17 01:29:43.923357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:35.655 [2024-11-17 01:29:43.923367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.656 [2024-11-17 01:29:43.925298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.656 [2024-11-17 01:29:43.925334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:35.656 BaseBdev3 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.656 [2024-11-17 01:29:43.935318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.656 [2024-11-17 01:29:43.937054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.656 [2024-11-17 01:29:43.937132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.656 [2024-11-17 01:29:43.937308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:35.656 [2024-11-17 01:29:43.937321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:35.656 [2024-11-17 01:29:43.937544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:35.656 [2024-11-17 01:29:43.937697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:35.656 [2024-11-17 01:29:43.937715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:35.656 [2024-11-17 01:29:43.937859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.656 "name": "raid_bdev1", 00:09:35.656 "uuid": "450065d5-bcc2-45f4-b709-f1062515a50f", 00:09:35.656 "strip_size_kb": 64, 00:09:35.656 "state": "online", 00:09:35.656 "raid_level": "raid0", 00:09:35.656 "superblock": true, 00:09:35.656 "num_base_bdevs": 3, 00:09:35.656 "num_base_bdevs_discovered": 3, 00:09:35.656 "num_base_bdevs_operational": 3, 00:09:35.656 "base_bdevs_list": [ 00:09:35.656 { 00:09:35.656 "name": "BaseBdev1", 
00:09:35.656 "uuid": "6f232e48-0f4c-53d5-9955-b01b9d3bbd0b", 00:09:35.656 "is_configured": true, 00:09:35.656 "data_offset": 2048, 00:09:35.656 "data_size": 63488 00:09:35.656 }, 00:09:35.656 { 00:09:35.656 "name": "BaseBdev2", 00:09:35.656 "uuid": "0c3e074c-38c7-5526-8329-e6645f47dd34", 00:09:35.656 "is_configured": true, 00:09:35.656 "data_offset": 2048, 00:09:35.656 "data_size": 63488 00:09:35.656 }, 00:09:35.656 { 00:09:35.656 "name": "BaseBdev3", 00:09:35.656 "uuid": "b303047d-3f34-5c23-93e9-6f30a6f3d963", 00:09:35.656 "is_configured": true, 00:09:35.656 "data_offset": 2048, 00:09:35.656 "data_size": 63488 00:09:35.656 } 00:09:35.656 ] 00:09:35.656 }' 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.656 01:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.959 01:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:35.959 01:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:36.240 [2024-11-17 01:29:44.447801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.177 "name": "raid_bdev1", 00:09:37.177 "uuid": "450065d5-bcc2-45f4-b709-f1062515a50f", 00:09:37.177 "strip_size_kb": 64, 00:09:37.177 "state": "online", 00:09:37.177 
"raid_level": "raid0", 00:09:37.177 "superblock": true, 00:09:37.177 "num_base_bdevs": 3, 00:09:37.177 "num_base_bdevs_discovered": 3, 00:09:37.177 "num_base_bdevs_operational": 3, 00:09:37.177 "base_bdevs_list": [ 00:09:37.177 { 00:09:37.177 "name": "BaseBdev1", 00:09:37.177 "uuid": "6f232e48-0f4c-53d5-9955-b01b9d3bbd0b", 00:09:37.177 "is_configured": true, 00:09:37.177 "data_offset": 2048, 00:09:37.177 "data_size": 63488 00:09:37.177 }, 00:09:37.177 { 00:09:37.177 "name": "BaseBdev2", 00:09:37.177 "uuid": "0c3e074c-38c7-5526-8329-e6645f47dd34", 00:09:37.177 "is_configured": true, 00:09:37.177 "data_offset": 2048, 00:09:37.177 "data_size": 63488 00:09:37.177 }, 00:09:37.177 { 00:09:37.177 "name": "BaseBdev3", 00:09:37.177 "uuid": "b303047d-3f34-5c23-93e9-6f30a6f3d963", 00:09:37.177 "is_configured": true, 00:09:37.177 "data_offset": 2048, 00:09:37.177 "data_size": 63488 00:09:37.177 } 00:09:37.177 ] 00:09:37.177 }' 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.177 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.437 [2024-11-17 01:29:45.857573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.437 [2024-11-17 01:29:45.857608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.437 [2024-11-17 01:29:45.860187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.437 [2024-11-17 01:29:45.860234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.437 [2024-11-17 01:29:45.860284] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.437 [2024-11-17 01:29:45.860293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:37.437 { 00:09:37.437 "results": [ 00:09:37.437 { 00:09:37.437 "job": "raid_bdev1", 00:09:37.437 "core_mask": "0x1", 00:09:37.437 "workload": "randrw", 00:09:37.437 "percentage": 50, 00:09:37.437 "status": "finished", 00:09:37.437 "queue_depth": 1, 00:09:37.437 "io_size": 131072, 00:09:37.437 "runtime": 1.410829, 00:09:37.437 "iops": 16308.142234104913, 00:09:37.437 "mibps": 2038.517779263114, 00:09:37.437 "io_failed": 1, 00:09:37.437 "io_timeout": 0, 00:09:37.437 "avg_latency_us": 85.07953641075706, 00:09:37.437 "min_latency_us": 18.78078602620087, 00:09:37.437 "max_latency_us": 1452.380786026201 00:09:37.437 } 00:09:37.437 ], 00:09:37.437 "core_count": 1 00:09:37.437 } 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65286 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65286 ']' 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65286 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.437 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65286 00:09:37.697 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.697 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.697 killing process with pid 65286 00:09:37.697 01:29:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65286' 00:09:37.697 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65286 00:09:37.697 [2024-11-17 01:29:45.909414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.697 01:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65286 00:09:37.697 [2024-11-17 01:29:46.132234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lak95r9Lfn 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:39.076 00:09:39.076 real 0m4.476s 00:09:39.076 user 0m5.319s 00:09:39.076 sys 0m0.568s 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.076 01:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.076 ************************************ 00:09:39.076 END TEST raid_write_error_test 00:09:39.076 ************************************ 00:09:39.076 01:29:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:39.076 01:29:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:39.076 01:29:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:39.076 01:29:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.076 01:29:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.076 ************************************ 00:09:39.076 START TEST raid_state_function_test 00:09:39.076 ************************************ 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:39.076 01:29:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:39.076 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65430 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65430' 00:09:39.077 Process raid pid: 65430 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65430 00:09:39.077 01:29:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65430 ']' 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.077 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.077 [2024-11-17 01:29:47.442550] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:39.077 [2024-11-17 01:29:47.442696] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.336 [2024-11-17 01:29:47.617010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.336 [2024-11-17 01:29:47.730586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.595 [2024-11-17 01:29:47.931808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.595 [2024-11-17 01:29:47.931854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.854 [2024-11-17 01:29:48.254117] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.854 [2024-11-17 01:29:48.254172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.854 [2024-11-17 01:29:48.254182] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.854 [2024-11-17 01:29:48.254191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.854 [2024-11-17 01:29:48.254197] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.854 [2024-11-17 01:29:48.254206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.854 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.855 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.855 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.855 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.855 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.855 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.855 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.855 "name": "Existed_Raid", 00:09:39.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.855 "strip_size_kb": 64, 00:09:39.855 "state": "configuring", 00:09:39.855 "raid_level": "concat", 00:09:39.855 "superblock": false, 00:09:39.855 "num_base_bdevs": 3, 00:09:39.855 "num_base_bdevs_discovered": 0, 00:09:39.855 "num_base_bdevs_operational": 3, 00:09:39.855 "base_bdevs_list": [ 00:09:39.855 { 00:09:39.855 "name": "BaseBdev1", 00:09:39.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.855 "is_configured": false, 00:09:39.855 "data_offset": 0, 00:09:39.855 "data_size": 0 00:09:39.855 }, 00:09:39.855 { 00:09:39.855 "name": "BaseBdev2", 00:09:39.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.855 "is_configured": false, 00:09:39.855 "data_offset": 0, 00:09:39.855 "data_size": 0 00:09:39.855 }, 00:09:39.855 { 00:09:39.855 "name": "BaseBdev3", 00:09:39.855 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:39.855 "is_configured": false, 00:09:39.855 "data_offset": 0, 00:09:39.855 "data_size": 0 00:09:39.855 } 00:09:39.855 ] 00:09:39.855 }' 00:09:39.855 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.855 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.424 [2024-11-17 01:29:48.693315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.424 [2024-11-17 01:29:48.693355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.424 [2024-11-17 01:29:48.705290] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.424 [2024-11-17 01:29:48.705337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.424 [2024-11-17 01:29:48.705346] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.424 [2024-11-17 01:29:48.705355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:40.424 [2024-11-17 01:29:48.705360] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.424 [2024-11-17 01:29:48.705369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.424 [2024-11-17 01:29:48.753625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.424 BaseBdev1 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.424 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.424 [ 00:09:40.424 { 00:09:40.424 "name": "BaseBdev1", 00:09:40.424 "aliases": [ 00:09:40.424 "74b2be7c-3883-4313-a122-e5c6ac23c8ac" 00:09:40.424 ], 00:09:40.424 "product_name": "Malloc disk", 00:09:40.424 "block_size": 512, 00:09:40.424 "num_blocks": 65536, 00:09:40.424 "uuid": "74b2be7c-3883-4313-a122-e5c6ac23c8ac", 00:09:40.424 "assigned_rate_limits": { 00:09:40.424 "rw_ios_per_sec": 0, 00:09:40.424 "rw_mbytes_per_sec": 0, 00:09:40.424 "r_mbytes_per_sec": 0, 00:09:40.424 "w_mbytes_per_sec": 0 00:09:40.424 }, 00:09:40.424 "claimed": true, 00:09:40.424 "claim_type": "exclusive_write", 00:09:40.424 "zoned": false, 00:09:40.424 "supported_io_types": { 00:09:40.424 "read": true, 00:09:40.424 "write": true, 00:09:40.424 "unmap": true, 00:09:40.424 "flush": true, 00:09:40.424 "reset": true, 00:09:40.424 "nvme_admin": false, 00:09:40.424 "nvme_io": false, 00:09:40.424 "nvme_io_md": false, 00:09:40.424 "write_zeroes": true, 00:09:40.424 "zcopy": true, 00:09:40.424 "get_zone_info": false, 00:09:40.424 "zone_management": false, 00:09:40.424 "zone_append": false, 00:09:40.424 "compare": false, 00:09:40.424 "compare_and_write": false, 00:09:40.425 "abort": true, 00:09:40.425 "seek_hole": false, 00:09:40.425 "seek_data": false, 00:09:40.425 "copy": true, 00:09:40.425 "nvme_iov_md": false 00:09:40.425 }, 00:09:40.425 "memory_domains": [ 00:09:40.425 { 00:09:40.425 "dma_device_id": "system", 00:09:40.425 "dma_device_type": 1 00:09:40.425 }, 00:09:40.425 { 00:09:40.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:40.425 "dma_device_type": 2 00:09:40.425 } 00:09:40.425 ], 00:09:40.425 "driver_specific": {} 00:09:40.425 } 00:09:40.425 ] 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.425 01:29:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.425 "name": "Existed_Raid", 00:09:40.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.425 "strip_size_kb": 64, 00:09:40.425 "state": "configuring", 00:09:40.425 "raid_level": "concat", 00:09:40.425 "superblock": false, 00:09:40.425 "num_base_bdevs": 3, 00:09:40.425 "num_base_bdevs_discovered": 1, 00:09:40.425 "num_base_bdevs_operational": 3, 00:09:40.425 "base_bdevs_list": [ 00:09:40.425 { 00:09:40.425 "name": "BaseBdev1", 00:09:40.425 "uuid": "74b2be7c-3883-4313-a122-e5c6ac23c8ac", 00:09:40.425 "is_configured": true, 00:09:40.425 "data_offset": 0, 00:09:40.425 "data_size": 65536 00:09:40.425 }, 00:09:40.425 { 00:09:40.425 "name": "BaseBdev2", 00:09:40.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.425 "is_configured": false, 00:09:40.425 "data_offset": 0, 00:09:40.425 "data_size": 0 00:09:40.425 }, 00:09:40.425 { 00:09:40.425 "name": "BaseBdev3", 00:09:40.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.425 "is_configured": false, 00:09:40.425 "data_offset": 0, 00:09:40.425 "data_size": 0 00:09:40.425 } 00:09:40.425 ] 00:09:40.425 }' 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.425 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.993 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.993 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.993 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.993 [2024-11-17 01:29:49.212905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.993 [2024-11-17 01:29:49.213020] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:40.993 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.994 [2024-11-17 01:29:49.224926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.994 [2024-11-17 01:29:49.226655] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.994 [2024-11-17 01:29:49.226700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.994 [2024-11-17 01:29:49.226710] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.994 [2024-11-17 01:29:49.226719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.994 01:29:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.994 "name": "Existed_Raid", 00:09:40.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.994 "strip_size_kb": 64, 00:09:40.994 "state": "configuring", 00:09:40.994 "raid_level": "concat", 00:09:40.994 "superblock": false, 00:09:40.994 "num_base_bdevs": 3, 00:09:40.994 "num_base_bdevs_discovered": 1, 00:09:40.994 "num_base_bdevs_operational": 3, 00:09:40.994 "base_bdevs_list": [ 00:09:40.994 { 00:09:40.994 "name": "BaseBdev1", 00:09:40.994 "uuid": "74b2be7c-3883-4313-a122-e5c6ac23c8ac", 00:09:40.994 "is_configured": true, 00:09:40.994 "data_offset": 
0, 00:09:40.994 "data_size": 65536 00:09:40.994 }, 00:09:40.994 { 00:09:40.994 "name": "BaseBdev2", 00:09:40.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.994 "is_configured": false, 00:09:40.994 "data_offset": 0, 00:09:40.994 "data_size": 0 00:09:40.994 }, 00:09:40.994 { 00:09:40.994 "name": "BaseBdev3", 00:09:40.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.994 "is_configured": false, 00:09:40.994 "data_offset": 0, 00:09:40.994 "data_size": 0 00:09:40.994 } 00:09:40.994 ] 00:09:40.994 }' 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.994 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.254 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.254 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.254 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.513 [2024-11-17 01:29:49.727577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.513 BaseBdev2 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.513 [ 00:09:41.513 { 00:09:41.513 "name": "BaseBdev2", 00:09:41.513 "aliases": [ 00:09:41.513 "0ab62a61-fd44-4ec2-a76d-1a892645b462" 00:09:41.513 ], 00:09:41.513 "product_name": "Malloc disk", 00:09:41.513 "block_size": 512, 00:09:41.513 "num_blocks": 65536, 00:09:41.513 "uuid": "0ab62a61-fd44-4ec2-a76d-1a892645b462", 00:09:41.513 "assigned_rate_limits": { 00:09:41.513 "rw_ios_per_sec": 0, 00:09:41.513 "rw_mbytes_per_sec": 0, 00:09:41.513 "r_mbytes_per_sec": 0, 00:09:41.513 "w_mbytes_per_sec": 0 00:09:41.513 }, 00:09:41.513 "claimed": true, 00:09:41.513 "claim_type": "exclusive_write", 00:09:41.513 "zoned": false, 00:09:41.513 "supported_io_types": { 00:09:41.513 "read": true, 00:09:41.513 "write": true, 00:09:41.513 "unmap": true, 00:09:41.513 "flush": true, 00:09:41.513 "reset": true, 00:09:41.513 "nvme_admin": false, 00:09:41.513 "nvme_io": false, 00:09:41.513 "nvme_io_md": false, 00:09:41.513 "write_zeroes": true, 00:09:41.513 "zcopy": true, 00:09:41.513 "get_zone_info": false, 00:09:41.513 "zone_management": false, 00:09:41.513 "zone_append": false, 00:09:41.513 "compare": false, 00:09:41.513 "compare_and_write": false, 00:09:41.513 "abort": true, 00:09:41.513 "seek_hole": 
false, 00:09:41.513 "seek_data": false, 00:09:41.513 "copy": true, 00:09:41.513 "nvme_iov_md": false 00:09:41.513 }, 00:09:41.513 "memory_domains": [ 00:09:41.513 { 00:09:41.513 "dma_device_id": "system", 00:09:41.513 "dma_device_type": 1 00:09:41.513 }, 00:09:41.513 { 00:09:41.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.513 "dma_device_type": 2 00:09:41.513 } 00:09:41.513 ], 00:09:41.513 "driver_specific": {} 00:09:41.513 } 00:09:41.513 ] 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.513 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.513 "name": "Existed_Raid", 00:09:41.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.513 "strip_size_kb": 64, 00:09:41.513 "state": "configuring", 00:09:41.513 "raid_level": "concat", 00:09:41.513 "superblock": false, 00:09:41.513 "num_base_bdevs": 3, 00:09:41.513 "num_base_bdevs_discovered": 2, 00:09:41.513 "num_base_bdevs_operational": 3, 00:09:41.513 "base_bdevs_list": [ 00:09:41.513 { 00:09:41.513 "name": "BaseBdev1", 00:09:41.513 "uuid": "74b2be7c-3883-4313-a122-e5c6ac23c8ac", 00:09:41.513 "is_configured": true, 00:09:41.513 "data_offset": 0, 00:09:41.513 "data_size": 65536 00:09:41.513 }, 00:09:41.513 { 00:09:41.514 "name": "BaseBdev2", 00:09:41.514 "uuid": "0ab62a61-fd44-4ec2-a76d-1a892645b462", 00:09:41.514 "is_configured": true, 00:09:41.514 "data_offset": 0, 00:09:41.514 "data_size": 65536 00:09:41.514 }, 00:09:41.514 { 00:09:41.514 "name": "BaseBdev3", 00:09:41.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.514 "is_configured": false, 00:09:41.514 "data_offset": 0, 00:09:41.514 "data_size": 0 00:09:41.514 } 00:09:41.514 ] 00:09:41.514 }' 00:09:41.514 01:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.514 01:29:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.772 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.772 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.772 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.034 [2024-11-17 01:29:50.264838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.034 [2024-11-17 01:29:50.264970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:42.034 [2024-11-17 01:29:50.264986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:42.034 [2024-11-17 01:29:50.265257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:42.034 [2024-11-17 01:29:50.265421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:42.034 [2024-11-17 01:29:50.265430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:42.034 [2024-11-17 01:29:50.265699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.034 BaseBdev3 00:09:42.034 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.034 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:42.034 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:42.034 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.034 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.035 01:29:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.035 [ 00:09:42.035 { 00:09:42.035 "name": "BaseBdev3", 00:09:42.035 "aliases": [ 00:09:42.035 "c2e2462b-8eb1-4cc9-9e05-48c16aae1ece" 00:09:42.035 ], 00:09:42.035 "product_name": "Malloc disk", 00:09:42.035 "block_size": 512, 00:09:42.035 "num_blocks": 65536, 00:09:42.035 "uuid": "c2e2462b-8eb1-4cc9-9e05-48c16aae1ece", 00:09:42.035 "assigned_rate_limits": { 00:09:42.035 "rw_ios_per_sec": 0, 00:09:42.035 "rw_mbytes_per_sec": 0, 00:09:42.035 "r_mbytes_per_sec": 0, 00:09:42.035 "w_mbytes_per_sec": 0 00:09:42.035 }, 00:09:42.035 "claimed": true, 00:09:42.035 "claim_type": "exclusive_write", 00:09:42.035 "zoned": false, 00:09:42.035 "supported_io_types": { 00:09:42.035 "read": true, 00:09:42.035 "write": true, 00:09:42.035 "unmap": true, 00:09:42.035 "flush": true, 00:09:42.035 "reset": true, 00:09:42.035 "nvme_admin": false, 00:09:42.035 "nvme_io": false, 00:09:42.035 "nvme_io_md": false, 00:09:42.035 "write_zeroes": true, 00:09:42.035 "zcopy": true, 00:09:42.035 "get_zone_info": false, 00:09:42.035 "zone_management": false, 00:09:42.035 "zone_append": false, 00:09:42.035 "compare": false, 
00:09:42.035 "compare_and_write": false, 00:09:42.035 "abort": true, 00:09:42.035 "seek_hole": false, 00:09:42.035 "seek_data": false, 00:09:42.035 "copy": true, 00:09:42.035 "nvme_iov_md": false 00:09:42.035 }, 00:09:42.035 "memory_domains": [ 00:09:42.035 { 00:09:42.035 "dma_device_id": "system", 00:09:42.035 "dma_device_type": 1 00:09:42.035 }, 00:09:42.035 { 00:09:42.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.035 "dma_device_type": 2 00:09:42.035 } 00:09:42.035 ], 00:09:42.035 "driver_specific": {} 00:09:42.035 } 00:09:42.035 ] 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.035 "name": "Existed_Raid", 00:09:42.035 "uuid": "65959a9e-b090-4b03-909a-d45841eb2436", 00:09:42.035 "strip_size_kb": 64, 00:09:42.035 "state": "online", 00:09:42.035 "raid_level": "concat", 00:09:42.035 "superblock": false, 00:09:42.035 "num_base_bdevs": 3, 00:09:42.035 "num_base_bdevs_discovered": 3, 00:09:42.035 "num_base_bdevs_operational": 3, 00:09:42.035 "base_bdevs_list": [ 00:09:42.035 { 00:09:42.035 "name": "BaseBdev1", 00:09:42.035 "uuid": "74b2be7c-3883-4313-a122-e5c6ac23c8ac", 00:09:42.035 "is_configured": true, 00:09:42.035 "data_offset": 0, 00:09:42.035 "data_size": 65536 00:09:42.035 }, 00:09:42.035 { 00:09:42.035 "name": "BaseBdev2", 00:09:42.035 "uuid": "0ab62a61-fd44-4ec2-a76d-1a892645b462", 00:09:42.035 "is_configured": true, 00:09:42.035 "data_offset": 0, 00:09:42.035 "data_size": 65536 00:09:42.035 }, 00:09:42.035 { 00:09:42.035 "name": "BaseBdev3", 00:09:42.035 "uuid": "c2e2462b-8eb1-4cc9-9e05-48c16aae1ece", 00:09:42.035 "is_configured": true, 00:09:42.035 "data_offset": 0, 00:09:42.035 "data_size": 65536 00:09:42.035 } 00:09:42.035 ] 00:09:42.035 }' 00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:42.035 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.294 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.294 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.295 [2024-11-17 01:29:50.688444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.295 "name": "Existed_Raid", 00:09:42.295 "aliases": [ 00:09:42.295 "65959a9e-b090-4b03-909a-d45841eb2436" 00:09:42.295 ], 00:09:42.295 "product_name": "Raid Volume", 00:09:42.295 "block_size": 512, 00:09:42.295 "num_blocks": 196608, 00:09:42.295 "uuid": "65959a9e-b090-4b03-909a-d45841eb2436", 00:09:42.295 "assigned_rate_limits": { 00:09:42.295 "rw_ios_per_sec": 0, 00:09:42.295 "rw_mbytes_per_sec": 0, 00:09:42.295 "r_mbytes_per_sec": 
0, 00:09:42.295 "w_mbytes_per_sec": 0 00:09:42.295 }, 00:09:42.295 "claimed": false, 00:09:42.295 "zoned": false, 00:09:42.295 "supported_io_types": { 00:09:42.295 "read": true, 00:09:42.295 "write": true, 00:09:42.295 "unmap": true, 00:09:42.295 "flush": true, 00:09:42.295 "reset": true, 00:09:42.295 "nvme_admin": false, 00:09:42.295 "nvme_io": false, 00:09:42.295 "nvme_io_md": false, 00:09:42.295 "write_zeroes": true, 00:09:42.295 "zcopy": false, 00:09:42.295 "get_zone_info": false, 00:09:42.295 "zone_management": false, 00:09:42.295 "zone_append": false, 00:09:42.295 "compare": false, 00:09:42.295 "compare_and_write": false, 00:09:42.295 "abort": false, 00:09:42.295 "seek_hole": false, 00:09:42.295 "seek_data": false, 00:09:42.295 "copy": false, 00:09:42.295 "nvme_iov_md": false 00:09:42.295 }, 00:09:42.295 "memory_domains": [ 00:09:42.295 { 00:09:42.295 "dma_device_id": "system", 00:09:42.295 "dma_device_type": 1 00:09:42.295 }, 00:09:42.295 { 00:09:42.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.295 "dma_device_type": 2 00:09:42.295 }, 00:09:42.295 { 00:09:42.295 "dma_device_id": "system", 00:09:42.295 "dma_device_type": 1 00:09:42.295 }, 00:09:42.295 { 00:09:42.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.295 "dma_device_type": 2 00:09:42.295 }, 00:09:42.295 { 00:09:42.295 "dma_device_id": "system", 00:09:42.295 "dma_device_type": 1 00:09:42.295 }, 00:09:42.295 { 00:09:42.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.295 "dma_device_type": 2 00:09:42.295 } 00:09:42.295 ], 00:09:42.295 "driver_specific": { 00:09:42.295 "raid": { 00:09:42.295 "uuid": "65959a9e-b090-4b03-909a-d45841eb2436", 00:09:42.295 "strip_size_kb": 64, 00:09:42.295 "state": "online", 00:09:42.295 "raid_level": "concat", 00:09:42.295 "superblock": false, 00:09:42.295 "num_base_bdevs": 3, 00:09:42.295 "num_base_bdevs_discovered": 3, 00:09:42.295 "num_base_bdevs_operational": 3, 00:09:42.295 "base_bdevs_list": [ 00:09:42.295 { 00:09:42.295 "name": "BaseBdev1", 
00:09:42.295 "uuid": "74b2be7c-3883-4313-a122-e5c6ac23c8ac", 00:09:42.295 "is_configured": true, 00:09:42.295 "data_offset": 0, 00:09:42.295 "data_size": 65536 00:09:42.295 }, 00:09:42.295 { 00:09:42.295 "name": "BaseBdev2", 00:09:42.295 "uuid": "0ab62a61-fd44-4ec2-a76d-1a892645b462", 00:09:42.295 "is_configured": true, 00:09:42.295 "data_offset": 0, 00:09:42.295 "data_size": 65536 00:09:42.295 }, 00:09:42.295 { 00:09:42.295 "name": "BaseBdev3", 00:09:42.295 "uuid": "c2e2462b-8eb1-4cc9-9e05-48c16aae1ece", 00:09:42.295 "is_configured": true, 00:09:42.295 "data_offset": 0, 00:09:42.295 "data_size": 65536 00:09:42.295 } 00:09:42.295 ] 00:09:42.295 } 00:09:42.295 } 00:09:42.295 }' 00:09:42.295 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:42.555 BaseBdev2 00:09:42.555 BaseBdev3' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.555 01:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.555 [2024-11-17 01:29:50.975704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.555 [2024-11-17 01:29:50.975805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.555 [2024-11-17 01:29:50.975887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.814 "name": "Existed_Raid", 00:09:42.814 "uuid": "65959a9e-b090-4b03-909a-d45841eb2436", 00:09:42.814 "strip_size_kb": 64, 00:09:42.814 "state": "offline", 00:09:42.814 "raid_level": "concat", 00:09:42.814 "superblock": false, 00:09:42.814 "num_base_bdevs": 3, 00:09:42.814 "num_base_bdevs_discovered": 2, 00:09:42.814 "num_base_bdevs_operational": 2, 00:09:42.814 "base_bdevs_list": [ 00:09:42.814 { 00:09:42.814 "name": null, 00:09:42.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.814 "is_configured": false, 00:09:42.814 "data_offset": 0, 00:09:42.814 "data_size": 65536 00:09:42.814 }, 00:09:42.814 { 00:09:42.814 "name": "BaseBdev2", 00:09:42.814 "uuid": 
"0ab62a61-fd44-4ec2-a76d-1a892645b462", 00:09:42.814 "is_configured": true, 00:09:42.814 "data_offset": 0, 00:09:42.814 "data_size": 65536 00:09:42.814 }, 00:09:42.814 { 00:09:42.814 "name": "BaseBdev3", 00:09:42.814 "uuid": "c2e2462b-8eb1-4cc9-9e05-48c16aae1ece", 00:09:42.814 "is_configured": true, 00:09:42.814 "data_offset": 0, 00:09:42.814 "data_size": 65536 00:09:42.814 } 00:09:42.814 ] 00:09:42.814 }' 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.814 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.073 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:43.073 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.073 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.073 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.073 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:43.073 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.333 [2024-11-17 01:29:51.579313] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.333 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.333 [2024-11-17 01:29:51.729732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.333 [2024-11-17 01:29:51.729837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:43.593 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.593 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.593 01:29:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.593 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 BaseBdev2 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.594 
01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 [ 00:09:43.594 { 00:09:43.594 "name": "BaseBdev2", 00:09:43.594 "aliases": [ 00:09:43.594 "cab1b465-ce7f-4e95-96db-1d8205098ced" 00:09:43.594 ], 00:09:43.594 "product_name": "Malloc disk", 00:09:43.594 "block_size": 512, 00:09:43.594 "num_blocks": 65536, 00:09:43.594 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:43.594 "assigned_rate_limits": { 00:09:43.594 "rw_ios_per_sec": 0, 00:09:43.594 "rw_mbytes_per_sec": 0, 00:09:43.594 "r_mbytes_per_sec": 0, 00:09:43.594 "w_mbytes_per_sec": 0 00:09:43.594 }, 00:09:43.594 "claimed": false, 00:09:43.594 "zoned": false, 00:09:43.594 "supported_io_types": { 00:09:43.594 "read": true, 00:09:43.594 "write": true, 00:09:43.594 "unmap": true, 00:09:43.594 "flush": true, 00:09:43.594 "reset": true, 00:09:43.594 "nvme_admin": false, 00:09:43.594 "nvme_io": false, 00:09:43.594 "nvme_io_md": false, 00:09:43.594 "write_zeroes": true, 
00:09:43.594 "zcopy": true, 00:09:43.594 "get_zone_info": false, 00:09:43.594 "zone_management": false, 00:09:43.594 "zone_append": false, 00:09:43.594 "compare": false, 00:09:43.594 "compare_and_write": false, 00:09:43.594 "abort": true, 00:09:43.594 "seek_hole": false, 00:09:43.594 "seek_data": false, 00:09:43.594 "copy": true, 00:09:43.594 "nvme_iov_md": false 00:09:43.594 }, 00:09:43.594 "memory_domains": [ 00:09:43.594 { 00:09:43.594 "dma_device_id": "system", 00:09:43.594 "dma_device_type": 1 00:09:43.594 }, 00:09:43.594 { 00:09:43.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.594 "dma_device_type": 2 00:09:43.594 } 00:09:43.594 ], 00:09:43.594 "driver_specific": {} 00:09:43.594 } 00:09:43.594 ] 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.594 01:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 BaseBdev3 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.594 01:29:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 [ 00:09:43.594 { 00:09:43.594 "name": "BaseBdev3", 00:09:43.594 "aliases": [ 00:09:43.594 "77088299-0284-4d75-8234-a253148f430a" 00:09:43.594 ], 00:09:43.594 "product_name": "Malloc disk", 00:09:43.594 "block_size": 512, 00:09:43.594 "num_blocks": 65536, 00:09:43.594 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:43.594 "assigned_rate_limits": { 00:09:43.594 "rw_ios_per_sec": 0, 00:09:43.594 "rw_mbytes_per_sec": 0, 00:09:43.594 "r_mbytes_per_sec": 0, 00:09:43.594 "w_mbytes_per_sec": 0 00:09:43.594 }, 00:09:43.594 "claimed": false, 00:09:43.594 "zoned": false, 00:09:43.594 "supported_io_types": { 00:09:43.594 "read": true, 00:09:43.594 "write": true, 00:09:43.594 "unmap": true, 00:09:43.594 "flush": true, 00:09:43.594 "reset": true, 00:09:43.594 "nvme_admin": false, 00:09:43.594 "nvme_io": false, 00:09:43.594 "nvme_io_md": false, 00:09:43.594 "write_zeroes": true, 
00:09:43.594 "zcopy": true, 00:09:43.594 "get_zone_info": false, 00:09:43.594 "zone_management": false, 00:09:43.594 "zone_append": false, 00:09:43.594 "compare": false, 00:09:43.594 "compare_and_write": false, 00:09:43.594 "abort": true, 00:09:43.594 "seek_hole": false, 00:09:43.594 "seek_data": false, 00:09:43.594 "copy": true, 00:09:43.594 "nvme_iov_md": false 00:09:43.594 }, 00:09:43.594 "memory_domains": [ 00:09:43.594 { 00:09:43.594 "dma_device_id": "system", 00:09:43.594 "dma_device_type": 1 00:09:43.594 }, 00:09:43.594 { 00:09:43.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.594 "dma_device_type": 2 00:09:43.594 } 00:09:43.594 ], 00:09:43.594 "driver_specific": {} 00:09:43.594 } 00:09:43.594 ] 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.594 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.594 [2024-11-17 01:29:52.051666] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.594 [2024-11-17 01:29:52.051750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.594 [2024-11-17 01:29:52.051800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.855 [2024-11-17 01:29:52.053546] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.855 "name": "Existed_Raid", 00:09:43.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.855 "strip_size_kb": 64, 00:09:43.855 "state": "configuring", 00:09:43.855 "raid_level": "concat", 00:09:43.855 "superblock": false, 00:09:43.855 "num_base_bdevs": 3, 00:09:43.855 "num_base_bdevs_discovered": 2, 00:09:43.855 "num_base_bdevs_operational": 3, 00:09:43.855 "base_bdevs_list": [ 00:09:43.855 { 00:09:43.855 "name": "BaseBdev1", 00:09:43.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.855 "is_configured": false, 00:09:43.855 "data_offset": 0, 00:09:43.855 "data_size": 0 00:09:43.855 }, 00:09:43.855 { 00:09:43.855 "name": "BaseBdev2", 00:09:43.855 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:43.855 "is_configured": true, 00:09:43.855 "data_offset": 0, 00:09:43.855 "data_size": 65536 00:09:43.855 }, 00:09:43.855 { 00:09:43.855 "name": "BaseBdev3", 00:09:43.855 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:43.855 "is_configured": true, 00:09:43.855 "data_offset": 0, 00:09:43.855 "data_size": 65536 00:09:43.855 } 00:09:43.855 ] 00:09:43.855 }' 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.855 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.115 [2024-11-17 01:29:52.471009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.115 "name": "Existed_Raid", 00:09:44.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.115 "strip_size_kb": 64, 00:09:44.115 "state": "configuring", 00:09:44.115 "raid_level": "concat", 00:09:44.115 "superblock": false, 
00:09:44.115 "num_base_bdevs": 3, 00:09:44.115 "num_base_bdevs_discovered": 1, 00:09:44.115 "num_base_bdevs_operational": 3, 00:09:44.115 "base_bdevs_list": [ 00:09:44.115 { 00:09:44.115 "name": "BaseBdev1", 00:09:44.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.115 "is_configured": false, 00:09:44.115 "data_offset": 0, 00:09:44.115 "data_size": 0 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 "name": null, 00:09:44.115 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:44.115 "is_configured": false, 00:09:44.115 "data_offset": 0, 00:09:44.115 "data_size": 65536 00:09:44.115 }, 00:09:44.115 { 00:09:44.115 "name": "BaseBdev3", 00:09:44.115 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:44.115 "is_configured": true, 00:09:44.115 "data_offset": 0, 00:09:44.115 "data_size": 65536 00:09:44.115 } 00:09:44.115 ] 00:09:44.115 }' 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.115 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.683 
01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.683 [2024-11-17 01:29:52.954736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.683 BaseBdev1 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.683 [ 00:09:44.683 { 00:09:44.683 "name": "BaseBdev1", 00:09:44.683 "aliases": [ 00:09:44.683 "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136" 00:09:44.683 ], 00:09:44.683 "product_name": 
"Malloc disk", 00:09:44.683 "block_size": 512, 00:09:44.683 "num_blocks": 65536, 00:09:44.683 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:44.683 "assigned_rate_limits": { 00:09:44.683 "rw_ios_per_sec": 0, 00:09:44.683 "rw_mbytes_per_sec": 0, 00:09:44.683 "r_mbytes_per_sec": 0, 00:09:44.683 "w_mbytes_per_sec": 0 00:09:44.683 }, 00:09:44.683 "claimed": true, 00:09:44.683 "claim_type": "exclusive_write", 00:09:44.683 "zoned": false, 00:09:44.683 "supported_io_types": { 00:09:44.683 "read": true, 00:09:44.683 "write": true, 00:09:44.683 "unmap": true, 00:09:44.683 "flush": true, 00:09:44.683 "reset": true, 00:09:44.683 "nvme_admin": false, 00:09:44.683 "nvme_io": false, 00:09:44.683 "nvme_io_md": false, 00:09:44.683 "write_zeroes": true, 00:09:44.683 "zcopy": true, 00:09:44.683 "get_zone_info": false, 00:09:44.683 "zone_management": false, 00:09:44.683 "zone_append": false, 00:09:44.683 "compare": false, 00:09:44.683 "compare_and_write": false, 00:09:44.683 "abort": true, 00:09:44.683 "seek_hole": false, 00:09:44.683 "seek_data": false, 00:09:44.683 "copy": true, 00:09:44.683 "nvme_iov_md": false 00:09:44.683 }, 00:09:44.683 "memory_domains": [ 00:09:44.683 { 00:09:44.683 "dma_device_id": "system", 00:09:44.683 "dma_device_type": 1 00:09:44.683 }, 00:09:44.683 { 00:09:44.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.683 "dma_device_type": 2 00:09:44.683 } 00:09:44.683 ], 00:09:44.683 "driver_specific": {} 00:09:44.683 } 00:09:44.683 ] 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.683 01:29:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.683 01:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.683 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.683 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.683 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.683 "name": "Existed_Raid", 00:09:44.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.683 "strip_size_kb": 64, 00:09:44.683 "state": "configuring", 00:09:44.683 "raid_level": "concat", 00:09:44.683 "superblock": false, 00:09:44.684 "num_base_bdevs": 3, 00:09:44.684 "num_base_bdevs_discovered": 2, 00:09:44.684 "num_base_bdevs_operational": 3, 00:09:44.684 "base_bdevs_list": [ 00:09:44.684 { 00:09:44.684 "name": "BaseBdev1", 
00:09:44.684 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:44.684 "is_configured": true, 00:09:44.684 "data_offset": 0, 00:09:44.684 "data_size": 65536 00:09:44.684 }, 00:09:44.684 { 00:09:44.684 "name": null, 00:09:44.684 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:44.684 "is_configured": false, 00:09:44.684 "data_offset": 0, 00:09:44.684 "data_size": 65536 00:09:44.684 }, 00:09:44.684 { 00:09:44.684 "name": "BaseBdev3", 00:09:44.684 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:44.684 "is_configured": true, 00:09:44.684 "data_offset": 0, 00:09:44.684 "data_size": 65536 00:09:44.684 } 00:09:44.684 ] 00:09:44.684 }' 00:09:44.684 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.684 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.254 [2024-11-17 01:29:53.461898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:45.254 
01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.254 "name": "Existed_Raid", 00:09:45.254 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:45.254 "strip_size_kb": 64, 00:09:45.254 "state": "configuring", 00:09:45.254 "raid_level": "concat", 00:09:45.254 "superblock": false, 00:09:45.254 "num_base_bdevs": 3, 00:09:45.254 "num_base_bdevs_discovered": 1, 00:09:45.254 "num_base_bdevs_operational": 3, 00:09:45.254 "base_bdevs_list": [ 00:09:45.254 { 00:09:45.254 "name": "BaseBdev1", 00:09:45.254 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:45.254 "is_configured": true, 00:09:45.254 "data_offset": 0, 00:09:45.254 "data_size": 65536 00:09:45.254 }, 00:09:45.254 { 00:09:45.254 "name": null, 00:09:45.254 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:45.254 "is_configured": false, 00:09:45.254 "data_offset": 0, 00:09:45.254 "data_size": 65536 00:09:45.254 }, 00:09:45.254 { 00:09:45.254 "name": null, 00:09:45.254 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:45.254 "is_configured": false, 00:09:45.254 "data_offset": 0, 00:09:45.254 "data_size": 65536 00:09:45.254 } 00:09:45.254 ] 00:09:45.254 }' 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.254 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.514 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.514 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.514 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.514 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.514 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.775 [2024-11-17 01:29:53.985041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.775 01:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.775 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.775 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.775 "name": "Existed_Raid", 00:09:45.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.775 "strip_size_kb": 64, 00:09:45.775 "state": "configuring", 00:09:45.775 "raid_level": "concat", 00:09:45.775 "superblock": false, 00:09:45.775 "num_base_bdevs": 3, 00:09:45.775 "num_base_bdevs_discovered": 2, 00:09:45.775 "num_base_bdevs_operational": 3, 00:09:45.775 "base_bdevs_list": [ 00:09:45.775 { 00:09:45.775 "name": "BaseBdev1", 00:09:45.775 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:45.775 "is_configured": true, 00:09:45.775 "data_offset": 0, 00:09:45.775 "data_size": 65536 00:09:45.775 }, 00:09:45.775 { 00:09:45.775 "name": null, 00:09:45.775 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:45.775 "is_configured": false, 00:09:45.775 "data_offset": 0, 00:09:45.775 "data_size": 65536 00:09:45.775 }, 00:09:45.775 { 00:09:45.775 "name": "BaseBdev3", 00:09:45.775 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:45.775 "is_configured": true, 00:09:45.775 "data_offset": 0, 00:09:45.775 "data_size": 65536 00:09:45.775 } 00:09:45.775 ] 00:09:45.775 }' 00:09:45.775 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.775 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.035 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.035 [2024-11-17 01:29:54.444323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.295 01:29:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.295 "name": "Existed_Raid", 00:09:46.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.295 "strip_size_kb": 64, 00:09:46.295 "state": "configuring", 00:09:46.295 "raid_level": "concat", 00:09:46.295 "superblock": false, 00:09:46.295 "num_base_bdevs": 3, 00:09:46.295 "num_base_bdevs_discovered": 1, 00:09:46.295 "num_base_bdevs_operational": 3, 00:09:46.295 "base_bdevs_list": [ 00:09:46.295 { 00:09:46.295 "name": null, 00:09:46.295 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:46.295 "is_configured": false, 00:09:46.295 "data_offset": 0, 00:09:46.295 "data_size": 65536 00:09:46.295 }, 00:09:46.295 { 00:09:46.295 "name": null, 00:09:46.295 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:46.295 "is_configured": false, 00:09:46.295 "data_offset": 0, 00:09:46.295 "data_size": 65536 00:09:46.295 }, 00:09:46.295 { 00:09:46.295 "name": "BaseBdev3", 00:09:46.295 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:46.295 "is_configured": true, 00:09:46.295 "data_offset": 0, 00:09:46.295 "data_size": 65536 00:09:46.295 } 00:09:46.295 ] 00:09:46.295 }' 00:09:46.295 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.295 01:29:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.555 01:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.555 [2024-11-17 01:29:55.006217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.555 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.555 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.555 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.555 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.555 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.555 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.555 01:29:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.555 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.815 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.815 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.815 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.815 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.815 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.815 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.816 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.816 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.816 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.816 "name": "Existed_Raid", 00:09:46.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.816 "strip_size_kb": 64, 00:09:46.816 "state": "configuring", 00:09:46.816 "raid_level": "concat", 00:09:46.816 "superblock": false, 00:09:46.816 "num_base_bdevs": 3, 00:09:46.816 "num_base_bdevs_discovered": 2, 00:09:46.816 "num_base_bdevs_operational": 3, 00:09:46.816 "base_bdevs_list": [ 00:09:46.816 { 00:09:46.816 "name": null, 00:09:46.816 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:46.816 "is_configured": false, 00:09:46.816 "data_offset": 0, 00:09:46.816 "data_size": 65536 00:09:46.816 }, 00:09:46.816 { 00:09:46.816 "name": "BaseBdev2", 00:09:46.816 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:46.816 "is_configured": true, 00:09:46.816 "data_offset": 
0, 00:09:46.816 "data_size": 65536 00:09:46.816 }, 00:09:46.816 { 00:09:46.816 "name": "BaseBdev3", 00:09:46.816 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:46.816 "is_configured": true, 00:09:46.816 "data_offset": 0, 00:09:46.816 "data_size": 65536 00:09:46.816 } 00:09:46.816 ] 00:09:46.816 }' 00:09:46.816 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.816 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.074 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:47.074 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.074 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.074 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.074 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.074 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:47.074 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:47.074 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.075 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.075 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.075 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.075 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 058d18b9-2ed6-43f5-ad08-e1cc6a9c8136 00:09:47.075 01:29:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.075 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.334 [2024-11-17 01:29:55.533879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:47.334 [2024-11-17 01:29:55.533926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:47.334 [2024-11-17 01:29:55.533936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:47.334 [2024-11-17 01:29:55.534187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:47.334 [2024-11-17 01:29:55.534331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:47.334 [2024-11-17 01:29:55.534340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:47.334 [2024-11-17 01:29:55.534586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.334 NewBaseBdev 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.334 
01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.334 [ 00:09:47.334 { 00:09:47.334 "name": "NewBaseBdev", 00:09:47.334 "aliases": [ 00:09:47.334 "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136" 00:09:47.334 ], 00:09:47.334 "product_name": "Malloc disk", 00:09:47.334 "block_size": 512, 00:09:47.334 "num_blocks": 65536, 00:09:47.334 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:47.334 "assigned_rate_limits": { 00:09:47.334 "rw_ios_per_sec": 0, 00:09:47.334 "rw_mbytes_per_sec": 0, 00:09:47.334 "r_mbytes_per_sec": 0, 00:09:47.334 "w_mbytes_per_sec": 0 00:09:47.334 }, 00:09:47.334 "claimed": true, 00:09:47.334 "claim_type": "exclusive_write", 00:09:47.334 "zoned": false, 00:09:47.334 "supported_io_types": { 00:09:47.334 "read": true, 00:09:47.334 "write": true, 00:09:47.334 "unmap": true, 00:09:47.334 "flush": true, 00:09:47.334 "reset": true, 00:09:47.334 "nvme_admin": false, 00:09:47.334 "nvme_io": false, 00:09:47.334 "nvme_io_md": false, 00:09:47.334 "write_zeroes": true, 00:09:47.334 "zcopy": true, 00:09:47.334 "get_zone_info": false, 00:09:47.334 "zone_management": false, 00:09:47.334 "zone_append": false, 00:09:47.334 "compare": false, 00:09:47.334 "compare_and_write": false, 00:09:47.334 "abort": true, 00:09:47.334 "seek_hole": false, 00:09:47.334 "seek_data": false, 00:09:47.334 "copy": true, 00:09:47.334 "nvme_iov_md": false 00:09:47.334 }, 00:09:47.334 
"memory_domains": [ 00:09:47.334 { 00:09:47.334 "dma_device_id": "system", 00:09:47.334 "dma_device_type": 1 00:09:47.334 }, 00:09:47.334 { 00:09:47.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.334 "dma_device_type": 2 00:09:47.334 } 00:09:47.334 ], 00:09:47.334 "driver_specific": {} 00:09:47.334 } 00:09:47.334 ] 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.334 "name": "Existed_Raid", 00:09:47.334 "uuid": "389718f3-1bf8-4351-a7d1-6af4f1c7162d", 00:09:47.334 "strip_size_kb": 64, 00:09:47.334 "state": "online", 00:09:47.334 "raid_level": "concat", 00:09:47.334 "superblock": false, 00:09:47.334 "num_base_bdevs": 3, 00:09:47.334 "num_base_bdevs_discovered": 3, 00:09:47.334 "num_base_bdevs_operational": 3, 00:09:47.334 "base_bdevs_list": [ 00:09:47.334 { 00:09:47.334 "name": "NewBaseBdev", 00:09:47.334 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:47.334 "is_configured": true, 00:09:47.334 "data_offset": 0, 00:09:47.334 "data_size": 65536 00:09:47.334 }, 00:09:47.334 { 00:09:47.334 "name": "BaseBdev2", 00:09:47.334 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:47.334 "is_configured": true, 00:09:47.334 "data_offset": 0, 00:09:47.334 "data_size": 65536 00:09:47.334 }, 00:09:47.334 { 00:09:47.334 "name": "BaseBdev3", 00:09:47.334 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:47.334 "is_configured": true, 00:09:47.334 "data_offset": 0, 00:09:47.334 "data_size": 65536 00:09:47.334 } 00:09:47.334 ] 00:09:47.334 }' 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.334 01:29:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.595 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.595 [2024-11-17 01:29:56.037347] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.860 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.860 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.860 "name": "Existed_Raid", 00:09:47.860 "aliases": [ 00:09:47.860 "389718f3-1bf8-4351-a7d1-6af4f1c7162d" 00:09:47.860 ], 00:09:47.860 "product_name": "Raid Volume", 00:09:47.860 "block_size": 512, 00:09:47.860 "num_blocks": 196608, 00:09:47.860 "uuid": "389718f3-1bf8-4351-a7d1-6af4f1c7162d", 00:09:47.860 "assigned_rate_limits": { 00:09:47.860 "rw_ios_per_sec": 0, 00:09:47.860 "rw_mbytes_per_sec": 0, 00:09:47.860 "r_mbytes_per_sec": 0, 00:09:47.860 "w_mbytes_per_sec": 0 00:09:47.861 }, 00:09:47.861 "claimed": false, 00:09:47.861 "zoned": false, 00:09:47.861 "supported_io_types": { 00:09:47.861 "read": true, 00:09:47.861 "write": true, 00:09:47.861 "unmap": true, 00:09:47.861 "flush": true, 00:09:47.861 "reset": true, 00:09:47.861 "nvme_admin": false, 00:09:47.861 "nvme_io": false, 00:09:47.861 "nvme_io_md": false, 00:09:47.861 "write_zeroes": true, 
00:09:47.861 "zcopy": false, 00:09:47.861 "get_zone_info": false, 00:09:47.861 "zone_management": false, 00:09:47.861 "zone_append": false, 00:09:47.861 "compare": false, 00:09:47.861 "compare_and_write": false, 00:09:47.861 "abort": false, 00:09:47.861 "seek_hole": false, 00:09:47.861 "seek_data": false, 00:09:47.861 "copy": false, 00:09:47.861 "nvme_iov_md": false 00:09:47.861 }, 00:09:47.861 "memory_domains": [ 00:09:47.861 { 00:09:47.861 "dma_device_id": "system", 00:09:47.861 "dma_device_type": 1 00:09:47.861 }, 00:09:47.861 { 00:09:47.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.861 "dma_device_type": 2 00:09:47.861 }, 00:09:47.861 { 00:09:47.861 "dma_device_id": "system", 00:09:47.861 "dma_device_type": 1 00:09:47.861 }, 00:09:47.861 { 00:09:47.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.861 "dma_device_type": 2 00:09:47.861 }, 00:09:47.861 { 00:09:47.861 "dma_device_id": "system", 00:09:47.861 "dma_device_type": 1 00:09:47.861 }, 00:09:47.861 { 00:09:47.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.861 "dma_device_type": 2 00:09:47.861 } 00:09:47.861 ], 00:09:47.861 "driver_specific": { 00:09:47.861 "raid": { 00:09:47.861 "uuid": "389718f3-1bf8-4351-a7d1-6af4f1c7162d", 00:09:47.861 "strip_size_kb": 64, 00:09:47.861 "state": "online", 00:09:47.861 "raid_level": "concat", 00:09:47.861 "superblock": false, 00:09:47.861 "num_base_bdevs": 3, 00:09:47.861 "num_base_bdevs_discovered": 3, 00:09:47.861 "num_base_bdevs_operational": 3, 00:09:47.861 "base_bdevs_list": [ 00:09:47.861 { 00:09:47.861 "name": "NewBaseBdev", 00:09:47.861 "uuid": "058d18b9-2ed6-43f5-ad08-e1cc6a9c8136", 00:09:47.861 "is_configured": true, 00:09:47.861 "data_offset": 0, 00:09:47.861 "data_size": 65536 00:09:47.861 }, 00:09:47.861 { 00:09:47.861 "name": "BaseBdev2", 00:09:47.861 "uuid": "cab1b465-ce7f-4e95-96db-1d8205098ced", 00:09:47.861 "is_configured": true, 00:09:47.861 "data_offset": 0, 00:09:47.861 "data_size": 65536 00:09:47.861 }, 00:09:47.861 { 
00:09:47.861 "name": "BaseBdev3", 00:09:47.861 "uuid": "77088299-0284-4d75-8234-a253148f430a", 00:09:47.861 "is_configured": true, 00:09:47.861 "data_offset": 0, 00:09:47.861 "data_size": 65536 00:09:47.861 } 00:09:47.861 ] 00:09:47.861 } 00:09:47.861 } 00:09:47.861 }' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:47.861 BaseBdev2 00:09:47.861 BaseBdev3' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:47.861 [2024-11-17 01:29:56.288579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.861 [2024-11-17 01:29:56.288606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.861 [2024-11-17 01:29:56.288681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.861 [2024-11-17 01:29:56.288733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.861 [2024-11-17 01:29:56.288745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65430 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65430 ']' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65430 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.861 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65430 00:09:48.120 killing process with pid 65430 00:09:48.120 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.120 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.120 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65430' 00:09:48.120 01:29:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65430 00:09:48.120 [2024-11-17 01:29:56.336443] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.120 01:29:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65430 00:09:48.380 [2024-11-17 01:29:56.633256] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.318 01:29:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.318 00:09:49.318 real 0m10.361s 00:09:49.318 user 0m16.424s 00:09:49.318 sys 0m1.847s 00:09:49.318 ************************************ 00:09:49.318 END TEST raid_state_function_test 00:09:49.318 ************************************ 00:09:49.318 01:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.318 01:29:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.318 01:29:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:49.318 01:29:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.318 01:29:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.318 01:29:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.577 ************************************ 00:09:49.577 START TEST raid_state_function_test_sb 00:09:49.577 ************************************ 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:49.577 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66051 00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:49.578 Process raid pid: 66051 00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66051' 00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66051 00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66051 ']' 00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.578 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.578 [2024-11-17 01:29:57.877321] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:49.578 [2024-11-17 01:29:57.877510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.578 [2024-11-17 01:29:58.032257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.836 [2024-11-17 01:29:58.145821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.094 [2024-11-17 01:29:58.342671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.094 [2024-11-17 01:29:58.342826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.352 [2024-11-17 01:29:58.706769] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.352 [2024-11-17 01:29:58.706821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.352 [2024-11-17 
01:29:58.706832] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.352 [2024-11-17 01:29:58.706841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.352 [2024-11-17 01:29:58.706848] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.352 [2024-11-17 01:29:58.706856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.352 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.352 "name": "Existed_Raid", 00:09:50.352 "uuid": "b74ec030-3a80-4a81-b76c-fd9a88699d74", 00:09:50.352 "strip_size_kb": 64, 00:09:50.352 "state": "configuring", 00:09:50.352 "raid_level": "concat", 00:09:50.353 "superblock": true, 00:09:50.353 "num_base_bdevs": 3, 00:09:50.353 "num_base_bdevs_discovered": 0, 00:09:50.353 "num_base_bdevs_operational": 3, 00:09:50.353 "base_bdevs_list": [ 00:09:50.353 { 00:09:50.353 "name": "BaseBdev1", 00:09:50.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.353 "is_configured": false, 00:09:50.353 "data_offset": 0, 00:09:50.353 "data_size": 0 00:09:50.353 }, 00:09:50.353 { 00:09:50.353 "name": "BaseBdev2", 00:09:50.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.353 "is_configured": false, 00:09:50.353 "data_offset": 0, 00:09:50.353 "data_size": 0 00:09:50.353 }, 00:09:50.353 { 00:09:50.353 "name": "BaseBdev3", 00:09:50.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.353 "is_configured": false, 00:09:50.353 "data_offset": 0, 00:09:50.353 "data_size": 0 00:09:50.353 } 00:09:50.353 ] 00:09:50.353 }' 00:09:50.353 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.353 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.918 [2024-11-17 01:29:59.141906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.918 [2024-11-17 01:29:59.141984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.918 [2024-11-17 01:29:59.149907] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.918 [2024-11-17 01:29:59.149982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.918 [2024-11-17 01:29:59.150009] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.918 [2024-11-17 01:29:59.150031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.918 [2024-11-17 01:29:59.150061] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.918 [2024-11-17 01:29:59.150081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.918 
01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.918 [2024-11-17 01:29:59.191268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.918 BaseBdev1 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.918 [ 00:09:50.918 { 
00:09:50.918 "name": "BaseBdev1", 00:09:50.918 "aliases": [ 00:09:50.918 "b468a053-5e87-4d6d-a7c2-8310e14d9fc5" 00:09:50.918 ], 00:09:50.918 "product_name": "Malloc disk", 00:09:50.918 "block_size": 512, 00:09:50.918 "num_blocks": 65536, 00:09:50.918 "uuid": "b468a053-5e87-4d6d-a7c2-8310e14d9fc5", 00:09:50.918 "assigned_rate_limits": { 00:09:50.918 "rw_ios_per_sec": 0, 00:09:50.918 "rw_mbytes_per_sec": 0, 00:09:50.918 "r_mbytes_per_sec": 0, 00:09:50.918 "w_mbytes_per_sec": 0 00:09:50.918 }, 00:09:50.918 "claimed": true, 00:09:50.918 "claim_type": "exclusive_write", 00:09:50.918 "zoned": false, 00:09:50.918 "supported_io_types": { 00:09:50.918 "read": true, 00:09:50.918 "write": true, 00:09:50.918 "unmap": true, 00:09:50.918 "flush": true, 00:09:50.918 "reset": true, 00:09:50.918 "nvme_admin": false, 00:09:50.918 "nvme_io": false, 00:09:50.918 "nvme_io_md": false, 00:09:50.918 "write_zeroes": true, 00:09:50.918 "zcopy": true, 00:09:50.918 "get_zone_info": false, 00:09:50.918 "zone_management": false, 00:09:50.918 "zone_append": false, 00:09:50.918 "compare": false, 00:09:50.918 "compare_and_write": false, 00:09:50.918 "abort": true, 00:09:50.918 "seek_hole": false, 00:09:50.918 "seek_data": false, 00:09:50.918 "copy": true, 00:09:50.918 "nvme_iov_md": false 00:09:50.918 }, 00:09:50.918 "memory_domains": [ 00:09:50.918 { 00:09:50.918 "dma_device_id": "system", 00:09:50.918 "dma_device_type": 1 00:09:50.918 }, 00:09:50.918 { 00:09:50.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.918 "dma_device_type": 2 00:09:50.918 } 00:09:50.918 ], 00:09:50.918 "driver_specific": {} 00:09:50.918 } 00:09:50.918 ] 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.918 "name": "Existed_Raid", 00:09:50.918 "uuid": "212779fc-de72-42f3-a1fb-6e0d4f931095", 00:09:50.918 "strip_size_kb": 64, 00:09:50.918 "state": "configuring", 00:09:50.918 "raid_level": "concat", 00:09:50.918 "superblock": true, 00:09:50.918 
"num_base_bdevs": 3, 00:09:50.918 "num_base_bdevs_discovered": 1, 00:09:50.918 "num_base_bdevs_operational": 3, 00:09:50.918 "base_bdevs_list": [ 00:09:50.918 { 00:09:50.918 "name": "BaseBdev1", 00:09:50.918 "uuid": "b468a053-5e87-4d6d-a7c2-8310e14d9fc5", 00:09:50.918 "is_configured": true, 00:09:50.918 "data_offset": 2048, 00:09:50.918 "data_size": 63488 00:09:50.918 }, 00:09:50.918 { 00:09:50.918 "name": "BaseBdev2", 00:09:50.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.918 "is_configured": false, 00:09:50.918 "data_offset": 0, 00:09:50.918 "data_size": 0 00:09:50.918 }, 00:09:50.918 { 00:09:50.918 "name": "BaseBdev3", 00:09:50.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.918 "is_configured": false, 00:09:50.918 "data_offset": 0, 00:09:50.918 "data_size": 0 00:09:50.918 } 00:09:50.918 ] 00:09:50.918 }' 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.918 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.484 [2024-11-17 01:29:59.674470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.484 [2024-11-17 01:29:59.674565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:51.484 
01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.484 [2024-11-17 01:29:59.686506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.484 [2024-11-17 01:29:59.688294] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.484 [2024-11-17 01:29:59.688337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.484 [2024-11-17 01:29:59.688346] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.484 [2024-11-17 01:29:59.688355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.484 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.484 "name": "Existed_Raid", 00:09:51.484 "uuid": "320abf9c-324e-4f55-8dbc-ff0d60e45f04", 00:09:51.484 "strip_size_kb": 64, 00:09:51.484 "state": "configuring", 00:09:51.484 "raid_level": "concat", 00:09:51.484 "superblock": true, 00:09:51.484 "num_base_bdevs": 3, 00:09:51.484 "num_base_bdevs_discovered": 1, 00:09:51.484 "num_base_bdevs_operational": 3, 00:09:51.485 "base_bdevs_list": [ 00:09:51.485 { 00:09:51.485 "name": "BaseBdev1", 00:09:51.485 "uuid": "b468a053-5e87-4d6d-a7c2-8310e14d9fc5", 00:09:51.485 "is_configured": true, 00:09:51.485 "data_offset": 2048, 00:09:51.485 "data_size": 63488 00:09:51.485 }, 00:09:51.485 { 00:09:51.485 "name": "BaseBdev2", 00:09:51.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.485 "is_configured": false, 00:09:51.485 "data_offset": 0, 00:09:51.485 "data_size": 0 00:09:51.485 }, 00:09:51.485 { 00:09:51.485 "name": "BaseBdev3", 00:09:51.485 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:51.485 "is_configured": false, 00:09:51.485 "data_offset": 0, 00:09:51.485 "data_size": 0 00:09:51.485 } 00:09:51.485 ] 00:09:51.485 }' 00:09:51.485 01:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.485 01:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.743 [2024-11-17 01:30:00.163183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.743 BaseBdev2 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.743 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.743 [ 00:09:51.743 { 00:09:51.743 "name": "BaseBdev2", 00:09:51.743 "aliases": [ 00:09:51.743 "36a40813-7f68-4c99-a4f0-7816124e20bc" 00:09:51.743 ], 00:09:51.743 "product_name": "Malloc disk", 00:09:51.743 "block_size": 512, 00:09:51.743 "num_blocks": 65536, 00:09:51.743 "uuid": "36a40813-7f68-4c99-a4f0-7816124e20bc", 00:09:51.743 "assigned_rate_limits": { 00:09:51.743 "rw_ios_per_sec": 0, 00:09:51.743 "rw_mbytes_per_sec": 0, 00:09:51.743 "r_mbytes_per_sec": 0, 00:09:51.743 "w_mbytes_per_sec": 0 00:09:51.743 }, 00:09:51.743 "claimed": true, 00:09:51.743 "claim_type": "exclusive_write", 00:09:51.743 "zoned": false, 00:09:51.743 "supported_io_types": { 00:09:51.743 "read": true, 00:09:51.743 "write": true, 00:09:51.743 "unmap": true, 00:09:51.743 "flush": true, 00:09:51.743 "reset": true, 00:09:51.743 "nvme_admin": false, 00:09:51.743 "nvme_io": false, 00:09:51.743 "nvme_io_md": false, 00:09:51.743 "write_zeroes": true, 00:09:51.743 "zcopy": true, 00:09:51.743 "get_zone_info": false, 00:09:51.743 "zone_management": false, 00:09:51.743 "zone_append": false, 00:09:51.743 "compare": false, 00:09:51.743 "compare_and_write": false, 00:09:51.743 "abort": true, 00:09:51.743 "seek_hole": false, 00:09:51.743 "seek_data": false, 00:09:51.743 "copy": true, 00:09:51.743 "nvme_iov_md": false 00:09:51.743 }, 00:09:51.743 "memory_domains": [ 00:09:51.743 { 00:09:51.743 "dma_device_id": "system", 00:09:51.743 "dma_device_type": 1 00:09:51.743 }, 00:09:51.743 { 00:09:51.743 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.743 "dma_device_type": 2 00:09:51.743 } 00:09:51.743 ], 00:09:51.743 "driver_specific": {} 00:09:51.743 } 00:09:51.743 ] 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.002 "name": "Existed_Raid", 00:09:52.002 "uuid": "320abf9c-324e-4f55-8dbc-ff0d60e45f04", 00:09:52.002 "strip_size_kb": 64, 00:09:52.002 "state": "configuring", 00:09:52.002 "raid_level": "concat", 00:09:52.002 "superblock": true, 00:09:52.002 "num_base_bdevs": 3, 00:09:52.002 "num_base_bdevs_discovered": 2, 00:09:52.002 "num_base_bdevs_operational": 3, 00:09:52.002 "base_bdevs_list": [ 00:09:52.002 { 00:09:52.002 "name": "BaseBdev1", 00:09:52.002 "uuid": "b468a053-5e87-4d6d-a7c2-8310e14d9fc5", 00:09:52.002 "is_configured": true, 00:09:52.002 "data_offset": 2048, 00:09:52.002 "data_size": 63488 00:09:52.002 }, 00:09:52.002 { 00:09:52.002 "name": "BaseBdev2", 00:09:52.002 "uuid": "36a40813-7f68-4c99-a4f0-7816124e20bc", 00:09:52.002 "is_configured": true, 00:09:52.002 "data_offset": 2048, 00:09:52.002 "data_size": 63488 00:09:52.002 }, 00:09:52.002 { 00:09:52.002 "name": "BaseBdev3", 00:09:52.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.002 "is_configured": false, 00:09:52.002 "data_offset": 0, 00:09:52.002 "data_size": 0 00:09:52.002 } 00:09:52.002 ] 00:09:52.002 }' 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.002 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:52.261 01:30:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.261 [2024-11-17 01:30:00.694133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.261 [2024-11-17 01:30:00.694439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.261 BaseBdev3 00:09:52.261 [2024-11-17 01:30:00.694499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:52.261 [2024-11-17 01:30:00.694785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.261 [2024-11-17 01:30:00.694947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.261 [2024-11-17 01:30:00.694957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:52.261 [2024-11-17 01:30:00.695117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.261 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.520 [ 00:09:52.520 { 00:09:52.520 "name": "BaseBdev3", 00:09:52.520 "aliases": [ 00:09:52.520 "ee23ce66-6a4d-467d-ab1b-5957f2113029" 00:09:52.520 ], 00:09:52.520 "product_name": "Malloc disk", 00:09:52.520 "block_size": 512, 00:09:52.520 "num_blocks": 65536, 00:09:52.520 "uuid": "ee23ce66-6a4d-467d-ab1b-5957f2113029", 00:09:52.520 "assigned_rate_limits": { 00:09:52.520 "rw_ios_per_sec": 0, 00:09:52.520 "rw_mbytes_per_sec": 0, 00:09:52.520 "r_mbytes_per_sec": 0, 00:09:52.520 "w_mbytes_per_sec": 0 00:09:52.520 }, 00:09:52.520 "claimed": true, 00:09:52.520 "claim_type": "exclusive_write", 00:09:52.520 "zoned": false, 00:09:52.520 "supported_io_types": { 00:09:52.520 "read": true, 00:09:52.520 "write": true, 00:09:52.520 "unmap": true, 00:09:52.520 "flush": true, 00:09:52.520 "reset": true, 00:09:52.520 "nvme_admin": false, 00:09:52.520 "nvme_io": false, 00:09:52.520 "nvme_io_md": false, 00:09:52.520 "write_zeroes": true, 00:09:52.520 "zcopy": true, 00:09:52.520 "get_zone_info": false, 00:09:52.520 "zone_management": false, 00:09:52.520 "zone_append": false, 00:09:52.520 "compare": false, 00:09:52.520 "compare_and_write": false, 00:09:52.520 "abort": true, 00:09:52.520 "seek_hole": false, 00:09:52.520 "seek_data": false, 
00:09:52.520 "copy": true, 00:09:52.520 "nvme_iov_md": false 00:09:52.520 }, 00:09:52.520 "memory_domains": [ 00:09:52.520 { 00:09:52.520 "dma_device_id": "system", 00:09:52.520 "dma_device_type": 1 00:09:52.520 }, 00:09:52.520 { 00:09:52.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.520 "dma_device_type": 2 00:09:52.520 } 00:09:52.520 ], 00:09:52.520 "driver_specific": {} 00:09:52.520 } 00:09:52.520 ] 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.520 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.520 "name": "Existed_Raid", 00:09:52.520 "uuid": "320abf9c-324e-4f55-8dbc-ff0d60e45f04", 00:09:52.520 "strip_size_kb": 64, 00:09:52.520 "state": "online", 00:09:52.520 "raid_level": "concat", 00:09:52.520 "superblock": true, 00:09:52.520 "num_base_bdevs": 3, 00:09:52.520 "num_base_bdevs_discovered": 3, 00:09:52.520 "num_base_bdevs_operational": 3, 00:09:52.520 "base_bdevs_list": [ 00:09:52.520 { 00:09:52.520 "name": "BaseBdev1", 00:09:52.520 "uuid": "b468a053-5e87-4d6d-a7c2-8310e14d9fc5", 00:09:52.520 "is_configured": true, 00:09:52.520 "data_offset": 2048, 00:09:52.520 "data_size": 63488 00:09:52.520 }, 00:09:52.520 { 00:09:52.520 "name": "BaseBdev2", 00:09:52.520 "uuid": "36a40813-7f68-4c99-a4f0-7816124e20bc", 00:09:52.520 "is_configured": true, 00:09:52.520 "data_offset": 2048, 00:09:52.520 "data_size": 63488 00:09:52.520 }, 00:09:52.520 { 00:09:52.520 "name": "BaseBdev3", 00:09:52.520 "uuid": "ee23ce66-6a4d-467d-ab1b-5957f2113029", 00:09:52.521 "is_configured": true, 00:09:52.521 "data_offset": 2048, 00:09:52.521 "data_size": 63488 00:09:52.521 } 00:09:52.521 ] 00:09:52.521 }' 00:09:52.521 01:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.521 01:30:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.780 [2024-11-17 01:30:01.141645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.780 "name": "Existed_Raid", 00:09:52.780 "aliases": [ 00:09:52.780 "320abf9c-324e-4f55-8dbc-ff0d60e45f04" 00:09:52.780 ], 00:09:52.780 "product_name": "Raid Volume", 00:09:52.780 "block_size": 512, 00:09:52.780 "num_blocks": 190464, 00:09:52.780 "uuid": "320abf9c-324e-4f55-8dbc-ff0d60e45f04", 00:09:52.780 "assigned_rate_limits": { 00:09:52.780 "rw_ios_per_sec": 0, 00:09:52.780 "rw_mbytes_per_sec": 0, 00:09:52.780 
"r_mbytes_per_sec": 0, 00:09:52.780 "w_mbytes_per_sec": 0 00:09:52.780 }, 00:09:52.780 "claimed": false, 00:09:52.780 "zoned": false, 00:09:52.780 "supported_io_types": { 00:09:52.780 "read": true, 00:09:52.780 "write": true, 00:09:52.780 "unmap": true, 00:09:52.780 "flush": true, 00:09:52.780 "reset": true, 00:09:52.780 "nvme_admin": false, 00:09:52.780 "nvme_io": false, 00:09:52.780 "nvme_io_md": false, 00:09:52.780 "write_zeroes": true, 00:09:52.780 "zcopy": false, 00:09:52.780 "get_zone_info": false, 00:09:52.780 "zone_management": false, 00:09:52.780 "zone_append": false, 00:09:52.780 "compare": false, 00:09:52.780 "compare_and_write": false, 00:09:52.780 "abort": false, 00:09:52.780 "seek_hole": false, 00:09:52.780 "seek_data": false, 00:09:52.780 "copy": false, 00:09:52.780 "nvme_iov_md": false 00:09:52.780 }, 00:09:52.780 "memory_domains": [ 00:09:52.780 { 00:09:52.780 "dma_device_id": "system", 00:09:52.780 "dma_device_type": 1 00:09:52.780 }, 00:09:52.780 { 00:09:52.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.780 "dma_device_type": 2 00:09:52.780 }, 00:09:52.780 { 00:09:52.780 "dma_device_id": "system", 00:09:52.780 "dma_device_type": 1 00:09:52.780 }, 00:09:52.780 { 00:09:52.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.780 "dma_device_type": 2 00:09:52.780 }, 00:09:52.780 { 00:09:52.780 "dma_device_id": "system", 00:09:52.780 "dma_device_type": 1 00:09:52.780 }, 00:09:52.780 { 00:09:52.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.780 "dma_device_type": 2 00:09:52.780 } 00:09:52.780 ], 00:09:52.780 "driver_specific": { 00:09:52.780 "raid": { 00:09:52.780 "uuid": "320abf9c-324e-4f55-8dbc-ff0d60e45f04", 00:09:52.780 "strip_size_kb": 64, 00:09:52.780 "state": "online", 00:09:52.780 "raid_level": "concat", 00:09:52.780 "superblock": true, 00:09:52.780 "num_base_bdevs": 3, 00:09:52.780 "num_base_bdevs_discovered": 3, 00:09:52.780 "num_base_bdevs_operational": 3, 00:09:52.780 "base_bdevs_list": [ 00:09:52.780 { 00:09:52.780 
"name": "BaseBdev1", 00:09:52.780 "uuid": "b468a053-5e87-4d6d-a7c2-8310e14d9fc5", 00:09:52.780 "is_configured": true, 00:09:52.780 "data_offset": 2048, 00:09:52.780 "data_size": 63488 00:09:52.780 }, 00:09:52.780 { 00:09:52.780 "name": "BaseBdev2", 00:09:52.780 "uuid": "36a40813-7f68-4c99-a4f0-7816124e20bc", 00:09:52.780 "is_configured": true, 00:09:52.780 "data_offset": 2048, 00:09:52.780 "data_size": 63488 00:09:52.780 }, 00:09:52.780 { 00:09:52.780 "name": "BaseBdev3", 00:09:52.780 "uuid": "ee23ce66-6a4d-467d-ab1b-5957f2113029", 00:09:52.780 "is_configured": true, 00:09:52.780 "data_offset": 2048, 00:09:52.780 "data_size": 63488 00:09:52.780 } 00:09:52.780 ] 00:09:52.780 } 00:09:52.780 } 00:09:52.780 }' 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:52.780 BaseBdev2 00:09:52.780 BaseBdev3' 00:09:52.780 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.038 01:30:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.038 [2024-11-17 01:30:01.393016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.038 [2024-11-17 01:30:01.393043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.038 [2024-11-17 01:30:01.393095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.038 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.296 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.296 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.296 "name": "Existed_Raid", 00:09:53.296 "uuid": "320abf9c-324e-4f55-8dbc-ff0d60e45f04", 00:09:53.296 "strip_size_kb": 64, 00:09:53.296 "state": "offline", 00:09:53.296 "raid_level": "concat", 00:09:53.296 "superblock": true, 00:09:53.296 "num_base_bdevs": 3, 00:09:53.296 "num_base_bdevs_discovered": 2, 00:09:53.296 "num_base_bdevs_operational": 2, 00:09:53.296 "base_bdevs_list": [ 00:09:53.296 { 00:09:53.296 "name": null, 00:09:53.296 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:53.296 "is_configured": false, 00:09:53.296 "data_offset": 0, 00:09:53.296 "data_size": 63488 00:09:53.296 }, 00:09:53.296 { 00:09:53.296 "name": "BaseBdev2", 00:09:53.296 "uuid": "36a40813-7f68-4c99-a4f0-7816124e20bc", 00:09:53.296 "is_configured": true, 00:09:53.296 "data_offset": 2048, 00:09:53.296 "data_size": 63488 00:09:53.296 }, 00:09:53.296 { 00:09:53.296 "name": "BaseBdev3", 00:09:53.296 "uuid": "ee23ce66-6a4d-467d-ab1b-5957f2113029", 00:09:53.296 "is_configured": true, 00:09:53.296 "data_offset": 2048, 00:09:53.296 "data_size": 63488 00:09:53.296 } 00:09:53.296 ] 00:09:53.296 }' 00:09:53.296 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.296 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.554 01:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.554 [2024-11-17 01:30:01.957597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.813 [2024-11-17 01:30:02.112140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.813 [2024-11-17 01:30:02.112189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.813 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.073 BaseBdev2 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.073 
01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.073 [ 00:09:54.073 { 00:09:54.073 "name": "BaseBdev2", 00:09:54.073 "aliases": [ 00:09:54.073 "2e43f8d2-31c2-4940-b3fc-e2e0921048e7" 00:09:54.073 ], 00:09:54.073 "product_name": "Malloc disk", 00:09:54.073 "block_size": 512, 00:09:54.073 "num_blocks": 65536, 00:09:54.073 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:54.073 "assigned_rate_limits": { 00:09:54.073 "rw_ios_per_sec": 0, 00:09:54.073 "rw_mbytes_per_sec": 0, 00:09:54.073 "r_mbytes_per_sec": 0, 00:09:54.073 "w_mbytes_per_sec": 0 
00:09:54.073 }, 00:09:54.073 "claimed": false, 00:09:54.073 "zoned": false, 00:09:54.073 "supported_io_types": { 00:09:54.073 "read": true, 00:09:54.073 "write": true, 00:09:54.073 "unmap": true, 00:09:54.073 "flush": true, 00:09:54.073 "reset": true, 00:09:54.073 "nvme_admin": false, 00:09:54.073 "nvme_io": false, 00:09:54.073 "nvme_io_md": false, 00:09:54.073 "write_zeroes": true, 00:09:54.073 "zcopy": true, 00:09:54.073 "get_zone_info": false, 00:09:54.073 "zone_management": false, 00:09:54.073 "zone_append": false, 00:09:54.073 "compare": false, 00:09:54.073 "compare_and_write": false, 00:09:54.073 "abort": true, 00:09:54.073 "seek_hole": false, 00:09:54.073 "seek_data": false, 00:09:54.073 "copy": true, 00:09:54.073 "nvme_iov_md": false 00:09:54.073 }, 00:09:54.073 "memory_domains": [ 00:09:54.073 { 00:09:54.073 "dma_device_id": "system", 00:09:54.073 "dma_device_type": 1 00:09:54.073 }, 00:09:54.073 { 00:09:54.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.073 "dma_device_type": 2 00:09:54.073 } 00:09:54.073 ], 00:09:54.073 "driver_specific": {} 00:09:54.073 } 00:09:54.073 ] 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.073 BaseBdev3 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:54.073 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.074 [ 00:09:54.074 { 00:09:54.074 "name": "BaseBdev3", 00:09:54.074 "aliases": [ 00:09:54.074 "dc951a9d-98fc-406e-952f-5c63f87032fe" 00:09:54.074 ], 00:09:54.074 "product_name": "Malloc disk", 00:09:54.074 "block_size": 512, 00:09:54.074 "num_blocks": 65536, 00:09:54.074 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:54.074 "assigned_rate_limits": { 00:09:54.074 "rw_ios_per_sec": 0, 00:09:54.074 "rw_mbytes_per_sec": 0, 
00:09:54.074 "r_mbytes_per_sec": 0, 00:09:54.074 "w_mbytes_per_sec": 0 00:09:54.074 }, 00:09:54.074 "claimed": false, 00:09:54.074 "zoned": false, 00:09:54.074 "supported_io_types": { 00:09:54.074 "read": true, 00:09:54.074 "write": true, 00:09:54.074 "unmap": true, 00:09:54.074 "flush": true, 00:09:54.074 "reset": true, 00:09:54.074 "nvme_admin": false, 00:09:54.074 "nvme_io": false, 00:09:54.074 "nvme_io_md": false, 00:09:54.074 "write_zeroes": true, 00:09:54.074 "zcopy": true, 00:09:54.074 "get_zone_info": false, 00:09:54.074 "zone_management": false, 00:09:54.074 "zone_append": false, 00:09:54.074 "compare": false, 00:09:54.074 "compare_and_write": false, 00:09:54.074 "abort": true, 00:09:54.074 "seek_hole": false, 00:09:54.074 "seek_data": false, 00:09:54.074 "copy": true, 00:09:54.074 "nvme_iov_md": false 00:09:54.074 }, 00:09:54.074 "memory_domains": [ 00:09:54.074 { 00:09:54.074 "dma_device_id": "system", 00:09:54.074 "dma_device_type": 1 00:09:54.074 }, 00:09:54.074 { 00:09:54.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.074 "dma_device_type": 2 00:09:54.074 } 00:09:54.074 ], 00:09:54.074 "driver_specific": {} 00:09:54.074 } 00:09:54.074 ] 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.074 [2024-11-17 01:30:02.419564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.074 [2024-11-17 01:30:02.419648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.074 [2024-11-17 01:30:02.419690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.074 [2024-11-17 01:30:02.421369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.074 "name": "Existed_Raid", 00:09:54.074 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:54.074 "strip_size_kb": 64, 00:09:54.074 "state": "configuring", 00:09:54.074 "raid_level": "concat", 00:09:54.074 "superblock": true, 00:09:54.074 "num_base_bdevs": 3, 00:09:54.074 "num_base_bdevs_discovered": 2, 00:09:54.074 "num_base_bdevs_operational": 3, 00:09:54.074 "base_bdevs_list": [ 00:09:54.074 { 00:09:54.074 "name": "BaseBdev1", 00:09:54.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.074 "is_configured": false, 00:09:54.074 "data_offset": 0, 00:09:54.074 "data_size": 0 00:09:54.074 }, 00:09:54.074 { 00:09:54.074 "name": "BaseBdev2", 00:09:54.074 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:54.074 "is_configured": true, 00:09:54.074 "data_offset": 2048, 00:09:54.074 "data_size": 63488 00:09:54.074 }, 00:09:54.074 { 00:09:54.074 "name": "BaseBdev3", 00:09:54.074 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:54.074 "is_configured": true, 00:09:54.074 "data_offset": 2048, 00:09:54.074 "data_size": 63488 00:09:54.074 } 00:09:54.074 ] 00:09:54.074 }' 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.074 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.642 [2024-11-17 01:30:02.870865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.642 01:30:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.642 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.642 "name": "Existed_Raid", 00:09:54.642 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:54.642 "strip_size_kb": 64, 00:09:54.642 "state": "configuring", 00:09:54.642 "raid_level": "concat", 00:09:54.642 "superblock": true, 00:09:54.642 "num_base_bdevs": 3, 00:09:54.642 "num_base_bdevs_discovered": 1, 00:09:54.642 "num_base_bdevs_operational": 3, 00:09:54.642 "base_bdevs_list": [ 00:09:54.642 { 00:09:54.642 "name": "BaseBdev1", 00:09:54.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.642 "is_configured": false, 00:09:54.642 "data_offset": 0, 00:09:54.643 "data_size": 0 00:09:54.643 }, 00:09:54.643 { 00:09:54.643 "name": null, 00:09:54.643 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:54.643 "is_configured": false, 00:09:54.643 "data_offset": 0, 00:09:54.643 "data_size": 63488 00:09:54.643 }, 00:09:54.643 { 00:09:54.643 "name": "BaseBdev3", 00:09:54.643 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:54.643 "is_configured": true, 00:09:54.643 "data_offset": 2048, 00:09:54.643 "data_size": 63488 00:09:54.643 } 00:09:54.643 ] 00:09:54.643 }' 00:09:54.643 01:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.643 01:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.902 [2024-11-17 01:30:03.341363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.902 BaseBdev1 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.902 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.160 [ 00:09:55.160 { 00:09:55.160 "name": "BaseBdev1", 00:09:55.160 "aliases": [ 00:09:55.160 "643e5c20-d924-43bb-bcec-0e7d715034c0" 00:09:55.160 ], 00:09:55.160 "product_name": "Malloc disk", 00:09:55.160 "block_size": 512, 00:09:55.160 "num_blocks": 65536, 00:09:55.160 "uuid": "643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:55.160 "assigned_rate_limits": { 00:09:55.160 "rw_ios_per_sec": 0, 00:09:55.160 "rw_mbytes_per_sec": 0, 00:09:55.160 "r_mbytes_per_sec": 0, 00:09:55.160 "w_mbytes_per_sec": 0 00:09:55.160 }, 00:09:55.160 "claimed": true, 00:09:55.160 "claim_type": "exclusive_write", 00:09:55.160 "zoned": false, 00:09:55.160 "supported_io_types": { 00:09:55.160 "read": true, 00:09:55.160 "write": true, 00:09:55.160 "unmap": true, 00:09:55.160 "flush": true, 00:09:55.160 "reset": true, 00:09:55.160 "nvme_admin": false, 00:09:55.160 "nvme_io": false, 00:09:55.160 "nvme_io_md": false, 00:09:55.160 "write_zeroes": true, 00:09:55.160 "zcopy": true, 00:09:55.160 "get_zone_info": false, 00:09:55.160 "zone_management": false, 00:09:55.160 "zone_append": false, 00:09:55.160 "compare": false, 00:09:55.160 "compare_and_write": false, 00:09:55.160 "abort": true, 00:09:55.160 "seek_hole": false, 00:09:55.160 "seek_data": false, 00:09:55.160 "copy": true, 00:09:55.160 "nvme_iov_md": false 00:09:55.160 }, 00:09:55.160 "memory_domains": [ 00:09:55.160 { 00:09:55.161 "dma_device_id": "system", 00:09:55.161 "dma_device_type": 1 00:09:55.161 }, 00:09:55.161 { 00:09:55.161 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:55.161 "dma_device_type": 2 00:09:55.161 } 00:09:55.161 ], 00:09:55.161 "driver_specific": {} 00:09:55.161 } 00:09:55.161 ] 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.161 "name": "Existed_Raid", 00:09:55.161 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:55.161 "strip_size_kb": 64, 00:09:55.161 "state": "configuring", 00:09:55.161 "raid_level": "concat", 00:09:55.161 "superblock": true, 00:09:55.161 "num_base_bdevs": 3, 00:09:55.161 "num_base_bdevs_discovered": 2, 00:09:55.161 "num_base_bdevs_operational": 3, 00:09:55.161 "base_bdevs_list": [ 00:09:55.161 { 00:09:55.161 "name": "BaseBdev1", 00:09:55.161 "uuid": "643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:55.161 "is_configured": true, 00:09:55.161 "data_offset": 2048, 00:09:55.161 "data_size": 63488 00:09:55.161 }, 00:09:55.161 { 00:09:55.161 "name": null, 00:09:55.161 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:55.161 "is_configured": false, 00:09:55.161 "data_offset": 0, 00:09:55.161 "data_size": 63488 00:09:55.161 }, 00:09:55.161 { 00:09:55.161 "name": "BaseBdev3", 00:09:55.161 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:55.161 "is_configured": true, 00:09:55.161 "data_offset": 2048, 00:09:55.161 "data_size": 63488 00:09:55.161 } 00:09:55.161 ] 00:09:55.161 }' 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.161 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.420 [2024-11-17 01:30:03.840564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.420 01:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.679 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.679 "name": "Existed_Raid", 00:09:55.679 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:55.679 "strip_size_kb": 64, 00:09:55.679 "state": "configuring", 00:09:55.679 "raid_level": "concat", 00:09:55.679 "superblock": true, 00:09:55.679 "num_base_bdevs": 3, 00:09:55.679 "num_base_bdevs_discovered": 1, 00:09:55.679 "num_base_bdevs_operational": 3, 00:09:55.679 "base_bdevs_list": [ 00:09:55.679 { 00:09:55.679 "name": "BaseBdev1", 00:09:55.679 "uuid": "643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:55.679 "is_configured": true, 00:09:55.679 "data_offset": 2048, 00:09:55.679 "data_size": 63488 00:09:55.679 }, 00:09:55.679 { 00:09:55.679 "name": null, 00:09:55.679 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:55.679 "is_configured": false, 00:09:55.679 "data_offset": 0, 00:09:55.679 "data_size": 63488 00:09:55.679 }, 00:09:55.679 { 00:09:55.679 "name": null, 00:09:55.679 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:55.679 "is_configured": false, 00:09:55.679 "data_offset": 0, 00:09:55.679 "data_size": 63488 00:09:55.679 } 00:09:55.679 ] 00:09:55.679 }' 00:09:55.679 01:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.680 01:30:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.939 [2024-11-17 01:30:04.251863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.939 01:30:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.939 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.939 "name": "Existed_Raid", 00:09:55.939 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:55.939 "strip_size_kb": 64, 00:09:55.939 "state": "configuring", 00:09:55.939 "raid_level": "concat", 00:09:55.939 "superblock": true, 00:09:55.939 "num_base_bdevs": 3, 00:09:55.939 "num_base_bdevs_discovered": 2, 00:09:55.939 "num_base_bdevs_operational": 3, 00:09:55.939 "base_bdevs_list": [ 00:09:55.940 { 00:09:55.940 "name": "BaseBdev1", 00:09:55.940 "uuid": "643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:55.940 "is_configured": true, 00:09:55.940 "data_offset": 2048, 00:09:55.940 "data_size": 63488 00:09:55.940 }, 00:09:55.940 { 00:09:55.940 "name": null, 00:09:55.940 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:55.940 "is_configured": 
false, 00:09:55.940 "data_offset": 0, 00:09:55.940 "data_size": 63488 00:09:55.940 }, 00:09:55.940 { 00:09:55.940 "name": "BaseBdev3", 00:09:55.940 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:55.940 "is_configured": true, 00:09:55.940 "data_offset": 2048, 00:09:55.940 "data_size": 63488 00:09:55.940 } 00:09:55.940 ] 00:09:55.940 }' 00:09:55.940 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.940 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.508 [2024-11-17 01:30:04.739080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.508 01:30:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.508 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.509 "name": "Existed_Raid", 00:09:56.509 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:56.509 "strip_size_kb": 64, 00:09:56.509 "state": "configuring", 00:09:56.509 "raid_level": "concat", 00:09:56.509 "superblock": true, 00:09:56.509 "num_base_bdevs": 3, 00:09:56.509 
"num_base_bdevs_discovered": 1, 00:09:56.509 "num_base_bdevs_operational": 3, 00:09:56.509 "base_bdevs_list": [ 00:09:56.509 { 00:09:56.509 "name": null, 00:09:56.509 "uuid": "643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:56.509 "is_configured": false, 00:09:56.509 "data_offset": 0, 00:09:56.509 "data_size": 63488 00:09:56.509 }, 00:09:56.509 { 00:09:56.509 "name": null, 00:09:56.509 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:56.509 "is_configured": false, 00:09:56.509 "data_offset": 0, 00:09:56.509 "data_size": 63488 00:09:56.509 }, 00:09:56.509 { 00:09:56.509 "name": "BaseBdev3", 00:09:56.509 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:56.509 "is_configured": true, 00:09:56.509 "data_offset": 2048, 00:09:56.509 "data_size": 63488 00:09:56.509 } 00:09:56.509 ] 00:09:56.509 }' 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.509 01:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.078 01:30:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.078 [2024-11-17 01:30:05.305676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.078 
01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.078 "name": "Existed_Raid", 00:09:57.078 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:57.078 "strip_size_kb": 64, 00:09:57.078 "state": "configuring", 00:09:57.078 "raid_level": "concat", 00:09:57.078 "superblock": true, 00:09:57.078 "num_base_bdevs": 3, 00:09:57.078 "num_base_bdevs_discovered": 2, 00:09:57.078 "num_base_bdevs_operational": 3, 00:09:57.078 "base_bdevs_list": [ 00:09:57.078 { 00:09:57.078 "name": null, 00:09:57.078 "uuid": "643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:57.078 "is_configured": false, 00:09:57.078 "data_offset": 0, 00:09:57.078 "data_size": 63488 00:09:57.078 }, 00:09:57.078 { 00:09:57.078 "name": "BaseBdev2", 00:09:57.078 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:57.078 "is_configured": true, 00:09:57.078 "data_offset": 2048, 00:09:57.078 "data_size": 63488 00:09:57.078 }, 00:09:57.078 { 00:09:57.078 "name": "BaseBdev3", 00:09:57.078 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:57.078 "is_configured": true, 00:09:57.078 "data_offset": 2048, 00:09:57.078 "data_size": 63488 00:09:57.078 } 00:09:57.078 ] 00:09:57.078 }' 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.078 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.337 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.337 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.337 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.337 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:57.337 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 643e5c20-d924-43bb-bcec-0e7d715034c0 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.597 [2024-11-17 01:30:05.887966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:57.597 [2024-11-17 01:30:05.888228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:57.597 [2024-11-17 01:30:05.888266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:57.597 NewBaseBdev 00:09:57.597 [2024-11-17 01:30:05.888580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:57.597 [2024-11-17 01:30:05.888716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:57.597 [2024-11-17 01:30:05.888726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:09:57.597 [2024-11-17 01:30:05.888885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.597 [ 00:09:57.597 { 00:09:57.597 "name": "NewBaseBdev", 00:09:57.597 "aliases": [ 00:09:57.597 "643e5c20-d924-43bb-bcec-0e7d715034c0" 00:09:57.597 ], 00:09:57.597 "product_name": "Malloc disk", 00:09:57.597 "block_size": 512, 
00:09:57.597 "num_blocks": 65536, 00:09:57.597 "uuid": "643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:57.597 "assigned_rate_limits": { 00:09:57.597 "rw_ios_per_sec": 0, 00:09:57.597 "rw_mbytes_per_sec": 0, 00:09:57.597 "r_mbytes_per_sec": 0, 00:09:57.597 "w_mbytes_per_sec": 0 00:09:57.597 }, 00:09:57.597 "claimed": true, 00:09:57.597 "claim_type": "exclusive_write", 00:09:57.597 "zoned": false, 00:09:57.597 "supported_io_types": { 00:09:57.597 "read": true, 00:09:57.597 "write": true, 00:09:57.597 "unmap": true, 00:09:57.597 "flush": true, 00:09:57.597 "reset": true, 00:09:57.597 "nvme_admin": false, 00:09:57.597 "nvme_io": false, 00:09:57.597 "nvme_io_md": false, 00:09:57.597 "write_zeroes": true, 00:09:57.597 "zcopy": true, 00:09:57.597 "get_zone_info": false, 00:09:57.597 "zone_management": false, 00:09:57.597 "zone_append": false, 00:09:57.597 "compare": false, 00:09:57.597 "compare_and_write": false, 00:09:57.597 "abort": true, 00:09:57.597 "seek_hole": false, 00:09:57.597 "seek_data": false, 00:09:57.597 "copy": true, 00:09:57.597 "nvme_iov_md": false 00:09:57.597 }, 00:09:57.597 "memory_domains": [ 00:09:57.597 { 00:09:57.597 "dma_device_id": "system", 00:09:57.597 "dma_device_type": 1 00:09:57.597 }, 00:09:57.597 { 00:09:57.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.597 "dma_device_type": 2 00:09:57.597 } 00:09:57.597 ], 00:09:57.597 "driver_specific": {} 00:09:57.597 } 00:09:57.597 ] 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.597 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.598 "name": "Existed_Raid", 00:09:57.598 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:57.598 "strip_size_kb": 64, 00:09:57.598 "state": "online", 00:09:57.598 "raid_level": "concat", 00:09:57.598 "superblock": true, 00:09:57.598 "num_base_bdevs": 3, 00:09:57.598 "num_base_bdevs_discovered": 3, 00:09:57.598 "num_base_bdevs_operational": 3, 00:09:57.598 "base_bdevs_list": [ 00:09:57.598 { 00:09:57.598 "name": "NewBaseBdev", 00:09:57.598 "uuid": 
"643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:57.598 "is_configured": true, 00:09:57.598 "data_offset": 2048, 00:09:57.598 "data_size": 63488 00:09:57.598 }, 00:09:57.598 { 00:09:57.598 "name": "BaseBdev2", 00:09:57.598 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:57.598 "is_configured": true, 00:09:57.598 "data_offset": 2048, 00:09:57.598 "data_size": 63488 00:09:57.598 }, 00:09:57.598 { 00:09:57.598 "name": "BaseBdev3", 00:09:57.598 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:57.598 "is_configured": true, 00:09:57.598 "data_offset": 2048, 00:09:57.598 "data_size": 63488 00:09:57.598 } 00:09:57.598 ] 00:09:57.598 }' 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.598 01:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:58.167 [2024-11-17 01:30:06.399438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.167 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.167 "name": "Existed_Raid", 00:09:58.167 "aliases": [ 00:09:58.167 "42f482d3-1374-4a27-8575-f1916f7bcf5e" 00:09:58.167 ], 00:09:58.167 "product_name": "Raid Volume", 00:09:58.167 "block_size": 512, 00:09:58.167 "num_blocks": 190464, 00:09:58.167 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:58.167 "assigned_rate_limits": { 00:09:58.167 "rw_ios_per_sec": 0, 00:09:58.167 "rw_mbytes_per_sec": 0, 00:09:58.167 "r_mbytes_per_sec": 0, 00:09:58.167 "w_mbytes_per_sec": 0 00:09:58.167 }, 00:09:58.167 "claimed": false, 00:09:58.167 "zoned": false, 00:09:58.167 "supported_io_types": { 00:09:58.167 "read": true, 00:09:58.167 "write": true, 00:09:58.167 "unmap": true, 00:09:58.167 "flush": true, 00:09:58.167 "reset": true, 00:09:58.167 "nvme_admin": false, 00:09:58.167 "nvme_io": false, 00:09:58.167 "nvme_io_md": false, 00:09:58.167 "write_zeroes": true, 00:09:58.167 "zcopy": false, 00:09:58.167 "get_zone_info": false, 00:09:58.167 "zone_management": false, 00:09:58.167 "zone_append": false, 00:09:58.167 "compare": false, 00:09:58.167 "compare_and_write": false, 00:09:58.167 "abort": false, 00:09:58.167 "seek_hole": false, 00:09:58.167 "seek_data": false, 00:09:58.167 "copy": false, 00:09:58.168 "nvme_iov_md": false 00:09:58.168 }, 00:09:58.168 "memory_domains": [ 00:09:58.168 { 00:09:58.168 "dma_device_id": "system", 00:09:58.168 "dma_device_type": 1 00:09:58.168 }, 00:09:58.168 { 00:09:58.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.168 "dma_device_type": 2 00:09:58.168 }, 00:09:58.168 { 00:09:58.168 "dma_device_id": "system", 00:09:58.168 "dma_device_type": 1 00:09:58.168 }, 00:09:58.168 { 00:09:58.168 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.168 "dma_device_type": 2 00:09:58.168 }, 00:09:58.168 { 00:09:58.168 "dma_device_id": "system", 00:09:58.168 "dma_device_type": 1 00:09:58.168 }, 00:09:58.168 { 00:09:58.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.168 "dma_device_type": 2 00:09:58.168 } 00:09:58.168 ], 00:09:58.168 "driver_specific": { 00:09:58.168 "raid": { 00:09:58.168 "uuid": "42f482d3-1374-4a27-8575-f1916f7bcf5e", 00:09:58.168 "strip_size_kb": 64, 00:09:58.168 "state": "online", 00:09:58.168 "raid_level": "concat", 00:09:58.168 "superblock": true, 00:09:58.168 "num_base_bdevs": 3, 00:09:58.168 "num_base_bdevs_discovered": 3, 00:09:58.168 "num_base_bdevs_operational": 3, 00:09:58.168 "base_bdevs_list": [ 00:09:58.168 { 00:09:58.168 "name": "NewBaseBdev", 00:09:58.168 "uuid": "643e5c20-d924-43bb-bcec-0e7d715034c0", 00:09:58.168 "is_configured": true, 00:09:58.168 "data_offset": 2048, 00:09:58.168 "data_size": 63488 00:09:58.168 }, 00:09:58.168 { 00:09:58.168 "name": "BaseBdev2", 00:09:58.168 "uuid": "2e43f8d2-31c2-4940-b3fc-e2e0921048e7", 00:09:58.168 "is_configured": true, 00:09:58.168 "data_offset": 2048, 00:09:58.168 "data_size": 63488 00:09:58.168 }, 00:09:58.168 { 00:09:58.168 "name": "BaseBdev3", 00:09:58.168 "uuid": "dc951a9d-98fc-406e-952f-5c63f87032fe", 00:09:58.168 "is_configured": true, 00:09:58.168 "data_offset": 2048, 00:09:58.168 "data_size": 63488 00:09:58.168 } 00:09:58.168 ] 00:09:58.168 } 00:09:58.168 } 00:09:58.168 }' 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:58.168 BaseBdev2 00:09:58.168 BaseBdev3' 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.168 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.428 [2024-11-17 01:30:06.686679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.428 [2024-11-17 01:30:06.686711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.428 [2024-11-17 01:30:06.686815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.428 [2024-11-17 01:30:06.686868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.428 [2024-11-17 01:30:06.686880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66051 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66051 ']' 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66051 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66051 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66051' 00:09:58.428 killing process with pid 66051 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66051 00:09:58.428 [2024-11-17 01:30:06.735394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.428 01:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66051 00:09:58.688 [2024-11-17 01:30:07.021683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.088 01:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:00.088 00:10:00.088 real 0m10.334s 00:10:00.088 user 0m16.495s 00:10:00.088 sys 0m1.822s 00:10:00.088 01:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:00.088 01:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.088 ************************************ 00:10:00.088 END TEST raid_state_function_test_sb 00:10:00.088 ************************************ 00:10:00.088 01:30:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:00.088 01:30:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:00.088 01:30:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.088 01:30:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.088 ************************************ 00:10:00.088 START TEST raid_superblock_test 00:10:00.088 ************************************ 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:00.088 01:30:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66670 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66670 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66670 ']' 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.088 01:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.088 [2024-11-17 01:30:08.277849] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:00.088 [2024-11-17 01:30:08.278038] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66670 ] 00:10:00.088 [2024-11-17 01:30:08.450915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.348 [2024-11-17 01:30:08.563128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.348 [2024-11-17 01:30:08.759928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.348 [2024-11-17 01:30:08.759986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:00.919 
01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.919 malloc1 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.919 [2024-11-17 01:30:09.160415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.919 [2024-11-17 01:30:09.160586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.919 [2024-11-17 01:30:09.160627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:00.919 [2024-11-17 01:30:09.160656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.919 [2024-11-17 01:30:09.162686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.919 [2024-11-17 01:30:09.162755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.919 pt1 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.919 malloc2 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.919 [2024-11-17 01:30:09.216126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.919 [2024-11-17 01:30:09.216193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.919 [2024-11-17 01:30:09.216240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:00.919 [2024-11-17 01:30:09.216249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.919 [2024-11-17 01:30:09.218245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.919 [2024-11-17 01:30:09.218277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.919 
pt2 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.919 malloc3 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.919 [2024-11-17 01:30:09.288658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:00.919 [2024-11-17 01:30:09.288793] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.919 [2024-11-17 01:30:09.288847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:00.919 [2024-11-17 01:30:09.288878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.919 [2024-11-17 01:30:09.290922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.919 [2024-11-17 01:30:09.290996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:00.919 pt3 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:00.919 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.920 [2024-11-17 01:30:09.300691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:00.920 [2024-11-17 01:30:09.302412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.920 [2024-11-17 01:30:09.302487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:00.920 [2024-11-17 01:30:09.302633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:00.920 [2024-11-17 01:30:09.302647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:00.920 [2024-11-17 01:30:09.302911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:00.920 [2024-11-17 01:30:09.303076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:00.920 [2024-11-17 01:30:09.303087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:00.920 [2024-11-17 01:30:09.303245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.920 01:30:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.920 "name": "raid_bdev1", 00:10:00.920 "uuid": "089afb91-4f8c-400e-a0db-3a35f846ab37", 00:10:00.920 "strip_size_kb": 64, 00:10:00.920 "state": "online", 00:10:00.920 "raid_level": "concat", 00:10:00.920 "superblock": true, 00:10:00.920 "num_base_bdevs": 3, 00:10:00.920 "num_base_bdevs_discovered": 3, 00:10:00.920 "num_base_bdevs_operational": 3, 00:10:00.920 "base_bdevs_list": [ 00:10:00.920 { 00:10:00.920 "name": "pt1", 00:10:00.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.920 "is_configured": true, 00:10:00.920 "data_offset": 2048, 00:10:00.920 "data_size": 63488 00:10:00.920 }, 00:10:00.920 { 00:10:00.920 "name": "pt2", 00:10:00.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.920 "is_configured": true, 00:10:00.920 "data_offset": 2048, 00:10:00.920 "data_size": 63488 00:10:00.920 }, 00:10:00.920 { 00:10:00.920 "name": "pt3", 00:10:00.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.920 "is_configured": true, 00:10:00.920 "data_offset": 2048, 00:10:00.920 "data_size": 63488 00:10:00.920 } 00:10:00.920 ] 00:10:00.920 }' 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.920 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.488 [2024-11-17 01:30:09.792116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.488 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.488 "name": "raid_bdev1", 00:10:01.488 "aliases": [ 00:10:01.488 "089afb91-4f8c-400e-a0db-3a35f846ab37" 00:10:01.488 ], 00:10:01.488 "product_name": "Raid Volume", 00:10:01.488 "block_size": 512, 00:10:01.488 "num_blocks": 190464, 00:10:01.488 "uuid": "089afb91-4f8c-400e-a0db-3a35f846ab37", 00:10:01.488 "assigned_rate_limits": { 00:10:01.488 "rw_ios_per_sec": 0, 00:10:01.488 "rw_mbytes_per_sec": 0, 00:10:01.488 "r_mbytes_per_sec": 0, 00:10:01.488 "w_mbytes_per_sec": 0 00:10:01.488 }, 00:10:01.488 "claimed": false, 00:10:01.488 "zoned": false, 00:10:01.488 "supported_io_types": { 00:10:01.488 "read": true, 00:10:01.488 "write": true, 00:10:01.488 "unmap": true, 00:10:01.488 "flush": true, 00:10:01.488 "reset": true, 00:10:01.488 "nvme_admin": false, 00:10:01.488 "nvme_io": false, 00:10:01.488 "nvme_io_md": false, 00:10:01.488 "write_zeroes": true, 00:10:01.488 "zcopy": false, 00:10:01.488 "get_zone_info": false, 00:10:01.488 "zone_management": false, 00:10:01.488 "zone_append": false, 00:10:01.488 "compare": 
false, 00:10:01.488 "compare_and_write": false, 00:10:01.488 "abort": false, 00:10:01.488 "seek_hole": false, 00:10:01.488 "seek_data": false, 00:10:01.489 "copy": false, 00:10:01.489 "nvme_iov_md": false 00:10:01.489 }, 00:10:01.489 "memory_domains": [ 00:10:01.489 { 00:10:01.489 "dma_device_id": "system", 00:10:01.489 "dma_device_type": 1 00:10:01.489 }, 00:10:01.489 { 00:10:01.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.489 "dma_device_type": 2 00:10:01.489 }, 00:10:01.489 { 00:10:01.489 "dma_device_id": "system", 00:10:01.489 "dma_device_type": 1 00:10:01.489 }, 00:10:01.489 { 00:10:01.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.489 "dma_device_type": 2 00:10:01.489 }, 00:10:01.489 { 00:10:01.489 "dma_device_id": "system", 00:10:01.489 "dma_device_type": 1 00:10:01.489 }, 00:10:01.489 { 00:10:01.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.489 "dma_device_type": 2 00:10:01.489 } 00:10:01.489 ], 00:10:01.489 "driver_specific": { 00:10:01.489 "raid": { 00:10:01.489 "uuid": "089afb91-4f8c-400e-a0db-3a35f846ab37", 00:10:01.489 "strip_size_kb": 64, 00:10:01.489 "state": "online", 00:10:01.489 "raid_level": "concat", 00:10:01.489 "superblock": true, 00:10:01.489 "num_base_bdevs": 3, 00:10:01.489 "num_base_bdevs_discovered": 3, 00:10:01.489 "num_base_bdevs_operational": 3, 00:10:01.489 "base_bdevs_list": [ 00:10:01.489 { 00:10:01.489 "name": "pt1", 00:10:01.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.489 "is_configured": true, 00:10:01.489 "data_offset": 2048, 00:10:01.489 "data_size": 63488 00:10:01.489 }, 00:10:01.489 { 00:10:01.489 "name": "pt2", 00:10:01.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.489 "is_configured": true, 00:10:01.489 "data_offset": 2048, 00:10:01.489 "data_size": 63488 00:10:01.489 }, 00:10:01.489 { 00:10:01.489 "name": "pt3", 00:10:01.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.489 "is_configured": true, 00:10:01.489 "data_offset": 2048, 00:10:01.489 
"data_size": 63488 00:10:01.489 } 00:10:01.489 ] 00:10:01.489 } 00:10:01.489 } 00:10:01.489 }' 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:01.489 pt2 00:10:01.489 pt3' 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.489 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.749 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.749 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.749 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:01.749 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.749 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.749 01:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.749 [2024-11-17 01:30:10.071587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=089afb91-4f8c-400e-a0db-3a35f846ab37 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 089afb91-4f8c-400e-a0db-3a35f846ab37 ']' 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.749 [2024-11-17 01:30:10.103268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.749 [2024-11-17 01:30:10.103303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.749 [2024-11-17 01:30:10.103385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.749 [2024-11-17 01:30:10.103447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.749 [2024-11-17 01:30:10.103463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.749 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.008 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.008 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:02.008 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.009 [2024-11-17 01:30:10.251088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:02.009 [2024-11-17 01:30:10.252847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:02.009 
[2024-11-17 01:30:10.252901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:02.009 [2024-11-17 01:30:10.252946] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:02.009 [2024-11-17 01:30:10.252998] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:02.009 [2024-11-17 01:30:10.253016] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:02.009 [2024-11-17 01:30:10.253032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.009 [2024-11-17 01:30:10.253042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:02.009 request: 00:10:02.009 { 00:10:02.009 "name": "raid_bdev1", 00:10:02.009 "raid_level": "concat", 00:10:02.009 "base_bdevs": [ 00:10:02.009 "malloc1", 00:10:02.009 "malloc2", 00:10:02.009 "malloc3" 00:10:02.009 ], 00:10:02.009 "strip_size_kb": 64, 00:10:02.009 "superblock": false, 00:10:02.009 "method": "bdev_raid_create", 00:10:02.009 "req_id": 1 00:10:02.009 } 00:10:02.009 Got JSON-RPC error response 00:10:02.009 response: 00:10:02.009 { 00:10:02.009 "code": -17, 00:10:02.009 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:02.009 } 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.009 01:30:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.009 [2024-11-17 01:30:10.318895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.009 [2024-11-17 01:30:10.318946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.009 [2024-11-17 01:30:10.318964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:02.009 [2024-11-17 01:30:10.318974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.009 [2024-11-17 01:30:10.321096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.009 [2024-11-17 01:30:10.321132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.009 [2024-11-17 01:30:10.321202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:02.009 [2024-11-17 01:30:10.321248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:10:02.009 pt1 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.009 "name": "raid_bdev1", 00:10:02.009 "uuid": 
"089afb91-4f8c-400e-a0db-3a35f846ab37", 00:10:02.009 "strip_size_kb": 64, 00:10:02.009 "state": "configuring", 00:10:02.009 "raid_level": "concat", 00:10:02.009 "superblock": true, 00:10:02.009 "num_base_bdevs": 3, 00:10:02.009 "num_base_bdevs_discovered": 1, 00:10:02.009 "num_base_bdevs_operational": 3, 00:10:02.009 "base_bdevs_list": [ 00:10:02.009 { 00:10:02.009 "name": "pt1", 00:10:02.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.009 "is_configured": true, 00:10:02.009 "data_offset": 2048, 00:10:02.009 "data_size": 63488 00:10:02.009 }, 00:10:02.009 { 00:10:02.009 "name": null, 00:10:02.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.009 "is_configured": false, 00:10:02.009 "data_offset": 2048, 00:10:02.009 "data_size": 63488 00:10:02.009 }, 00:10:02.009 { 00:10:02.009 "name": null, 00:10:02.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.009 "is_configured": false, 00:10:02.009 "data_offset": 2048, 00:10:02.009 "data_size": 63488 00:10:02.009 } 00:10:02.009 ] 00:10:02.009 }' 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.009 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.580 [2024-11-17 01:30:10.806095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.580 [2024-11-17 01:30:10.806162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.580 [2024-11-17 01:30:10.806185] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:02.580 [2024-11-17 01:30:10.806193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.580 [2024-11-17 01:30:10.806618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.580 [2024-11-17 01:30:10.806647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.580 [2024-11-17 01:30:10.806731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:02.580 [2024-11-17 01:30:10.806767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.580 pt2 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.580 [2024-11-17 01:30:10.818075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.580 "name": "raid_bdev1", 00:10:02.580 "uuid": "089afb91-4f8c-400e-a0db-3a35f846ab37", 00:10:02.580 "strip_size_kb": 64, 00:10:02.580 "state": "configuring", 00:10:02.580 "raid_level": "concat", 00:10:02.580 "superblock": true, 00:10:02.580 "num_base_bdevs": 3, 00:10:02.580 "num_base_bdevs_discovered": 1, 00:10:02.580 "num_base_bdevs_operational": 3, 00:10:02.580 "base_bdevs_list": [ 00:10:02.580 { 00:10:02.580 "name": "pt1", 00:10:02.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.580 "is_configured": true, 00:10:02.580 "data_offset": 2048, 00:10:02.580 "data_size": 63488 00:10:02.580 }, 00:10:02.580 { 00:10:02.580 "name": null, 00:10:02.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.580 "is_configured": false, 00:10:02.580 "data_offset": 0, 00:10:02.580 "data_size": 63488 00:10:02.580 }, 00:10:02.580 { 00:10:02.580 "name": null, 00:10:02.580 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:02.580 "is_configured": false, 00:10:02.580 "data_offset": 2048, 00:10:02.580 "data_size": 63488 00:10:02.580 } 00:10:02.580 ] 00:10:02.580 }' 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.580 01:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.839 [2024-11-17 01:30:11.257315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.839 [2024-11-17 01:30:11.257388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.839 [2024-11-17 01:30:11.257407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:02.839 [2024-11-17 01:30:11.257418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.839 [2024-11-17 01:30:11.257866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.839 [2024-11-17 01:30:11.257895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.839 [2024-11-17 01:30:11.257972] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:02.839 [2024-11-17 01:30:11.258001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.839 pt2 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.839 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.839 [2024-11-17 01:30:11.265273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:02.839 [2024-11-17 01:30:11.265320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.840 [2024-11-17 01:30:11.265333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:02.840 [2024-11-17 01:30:11.265343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.840 [2024-11-17 01:30:11.265684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.840 [2024-11-17 01:30:11.265724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:02.840 [2024-11-17 01:30:11.265795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:02.840 [2024-11-17 01:30:11.265816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:02.840 [2024-11-17 01:30:11.265931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:02.840 [2024-11-17 01:30:11.265948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:02.840 [2024-11-17 01:30:11.266171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:02.840 [2024-11-17 
01:30:11.266307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:02.840 [2024-11-17 01:30:11.266316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:02.840 [2024-11-17 01:30:11.266443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.840 pt3 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.840 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.099 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.099 "name": "raid_bdev1", 00:10:03.099 "uuid": "089afb91-4f8c-400e-a0db-3a35f846ab37", 00:10:03.099 "strip_size_kb": 64, 00:10:03.099 "state": "online", 00:10:03.099 "raid_level": "concat", 00:10:03.099 "superblock": true, 00:10:03.099 "num_base_bdevs": 3, 00:10:03.099 "num_base_bdevs_discovered": 3, 00:10:03.099 "num_base_bdevs_operational": 3, 00:10:03.099 "base_bdevs_list": [ 00:10:03.099 { 00:10:03.099 "name": "pt1", 00:10:03.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.099 "is_configured": true, 00:10:03.099 "data_offset": 2048, 00:10:03.099 "data_size": 63488 00:10:03.099 }, 00:10:03.099 { 00:10:03.099 "name": "pt2", 00:10:03.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.099 "is_configured": true, 00:10:03.099 "data_offset": 2048, 00:10:03.099 "data_size": 63488 00:10:03.099 }, 00:10:03.099 { 00:10:03.099 "name": "pt3", 00:10:03.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.099 "is_configured": true, 00:10:03.099 "data_offset": 2048, 00:10:03.099 "data_size": 63488 00:10:03.099 } 00:10:03.099 ] 00:10:03.099 }' 00:10:03.099 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.099 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:03.360 01:30:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.360 [2024-11-17 01:30:11.700856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.360 "name": "raid_bdev1", 00:10:03.360 "aliases": [ 00:10:03.360 "089afb91-4f8c-400e-a0db-3a35f846ab37" 00:10:03.360 ], 00:10:03.360 "product_name": "Raid Volume", 00:10:03.360 "block_size": 512, 00:10:03.360 "num_blocks": 190464, 00:10:03.360 "uuid": "089afb91-4f8c-400e-a0db-3a35f846ab37", 00:10:03.360 "assigned_rate_limits": { 00:10:03.360 "rw_ios_per_sec": 0, 00:10:03.360 "rw_mbytes_per_sec": 0, 00:10:03.360 "r_mbytes_per_sec": 0, 00:10:03.360 "w_mbytes_per_sec": 0 00:10:03.360 }, 00:10:03.360 "claimed": false, 00:10:03.360 "zoned": false, 00:10:03.360 "supported_io_types": { 00:10:03.360 "read": true, 00:10:03.360 "write": true, 00:10:03.360 "unmap": true, 00:10:03.360 "flush": true, 00:10:03.360 "reset": true, 00:10:03.360 "nvme_admin": false, 00:10:03.360 "nvme_io": false, 00:10:03.360 "nvme_io_md": false, 00:10:03.360 
"write_zeroes": true, 00:10:03.360 "zcopy": false, 00:10:03.360 "get_zone_info": false, 00:10:03.360 "zone_management": false, 00:10:03.360 "zone_append": false, 00:10:03.360 "compare": false, 00:10:03.360 "compare_and_write": false, 00:10:03.360 "abort": false, 00:10:03.360 "seek_hole": false, 00:10:03.360 "seek_data": false, 00:10:03.360 "copy": false, 00:10:03.360 "nvme_iov_md": false 00:10:03.360 }, 00:10:03.360 "memory_domains": [ 00:10:03.360 { 00:10:03.360 "dma_device_id": "system", 00:10:03.360 "dma_device_type": 1 00:10:03.360 }, 00:10:03.360 { 00:10:03.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.360 "dma_device_type": 2 00:10:03.360 }, 00:10:03.360 { 00:10:03.360 "dma_device_id": "system", 00:10:03.360 "dma_device_type": 1 00:10:03.360 }, 00:10:03.360 { 00:10:03.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.360 "dma_device_type": 2 00:10:03.360 }, 00:10:03.360 { 00:10:03.360 "dma_device_id": "system", 00:10:03.360 "dma_device_type": 1 00:10:03.360 }, 00:10:03.360 { 00:10:03.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.360 "dma_device_type": 2 00:10:03.360 } 00:10:03.360 ], 00:10:03.360 "driver_specific": { 00:10:03.360 "raid": { 00:10:03.360 "uuid": "089afb91-4f8c-400e-a0db-3a35f846ab37", 00:10:03.360 "strip_size_kb": 64, 00:10:03.360 "state": "online", 00:10:03.360 "raid_level": "concat", 00:10:03.360 "superblock": true, 00:10:03.360 "num_base_bdevs": 3, 00:10:03.360 "num_base_bdevs_discovered": 3, 00:10:03.360 "num_base_bdevs_operational": 3, 00:10:03.360 "base_bdevs_list": [ 00:10:03.360 { 00:10:03.360 "name": "pt1", 00:10:03.360 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.360 "is_configured": true, 00:10:03.360 "data_offset": 2048, 00:10:03.360 "data_size": 63488 00:10:03.360 }, 00:10:03.360 { 00:10:03.360 "name": "pt2", 00:10:03.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.360 "is_configured": true, 00:10:03.360 "data_offset": 2048, 00:10:03.360 "data_size": 63488 00:10:03.360 }, 00:10:03.360 
{ 00:10:03.360 "name": "pt3", 00:10:03.360 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.360 "is_configured": true, 00:10:03.360 "data_offset": 2048, 00:10:03.360 "data_size": 63488 00:10:03.360 } 00:10:03.360 ] 00:10:03.360 } 00:10:03.360 } 00:10:03.360 }' 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:03.360 pt2 00:10:03.360 pt3' 00:10:03.360 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:03.621 01:30:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.621 01:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.621 
[2024-11-17 01:30:12.000294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 089afb91-4f8c-400e-a0db-3a35f846ab37 '!=' 089afb91-4f8c-400e-a0db-3a35f846ab37 ']' 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66670 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66670 ']' 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66670 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.621 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66670 00:10:03.881 killing process with pid 66670 00:10:03.881 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.881 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.881 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66670' 00:10:03.881 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66670 00:10:03.881 [2024-11-17 01:30:12.081577] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.881 [2024-11-17 01:30:12.081681] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.881 01:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66670 00:10:03.881 [2024-11-17 01:30:12.081741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.881 [2024-11-17 01:30:12.081753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:04.139 [2024-11-17 01:30:12.365929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.078 01:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:05.078 00:10:05.078 real 0m5.229s 00:10:05.078 user 0m7.577s 00:10:05.078 sys 0m0.894s 00:10:05.078 01:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.078 01:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.078 ************************************ 00:10:05.078 END TEST raid_superblock_test 00:10:05.078 ************************************ 00:10:05.078 01:30:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:05.078 01:30:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:05.078 01:30:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.078 01:30:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.078 ************************************ 00:10:05.078 START TEST raid_read_error_test 00:10:05.078 ************************************ 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:05.078 01:30:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.foWJYkZlR9 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66920 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66920 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66920 ']' 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.078 01:30:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.337 [2024-11-17 01:30:13.598267] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:05.337 [2024-11-17 01:30:13.598372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66920 ] 00:10:05.337 [2024-11-17 01:30:13.769929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.596 [2024-11-17 01:30:13.879549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.926 [2024-11-17 01:30:14.080820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.926 [2024-11-17 01:30:14.080866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 BaseBdev1_malloc 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 true 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 [2024-11-17 01:30:14.474070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:06.187 [2024-11-17 01:30:14.474130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.187 [2024-11-17 01:30:14.474151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:06.187 [2024-11-17 01:30:14.474161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.187 [2024-11-17 01:30:14.476214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.187 [2024-11-17 01:30:14.476252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:06.187 BaseBdev1 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 BaseBdev2_malloc 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 true 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 [2024-11-17 01:30:14.540047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:06.187 [2024-11-17 01:30:14.540118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.187 [2024-11-17 01:30:14.540133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:06.187 [2024-11-17 01:30:14.540143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.187 [2024-11-17 01:30:14.542124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.187 [2024-11-17 01:30:14.542157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:06.187 BaseBdev2 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 BaseBdev3_malloc 00:10:06.187 01:30:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 true 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 [2024-11-17 01:30:14.615746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:06.187 [2024-11-17 01:30:14.615819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.187 [2024-11-17 01:30:14.615840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:06.187 [2024-11-17 01:30:14.615851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.187 [2024-11-17 01:30:14.617942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.187 [2024-11-17 01:30:14.617975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:06.187 BaseBdev3 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.187 [2024-11-17 01:30:14.627803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.187 [2024-11-17 01:30:14.629541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.187 [2024-11-17 01:30:14.629641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.187 [2024-11-17 01:30:14.629862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:06.187 [2024-11-17 01:30:14.629882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:06.187 [2024-11-17 01:30:14.630138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:06.187 [2024-11-17 01:30:14.630305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:06.187 [2024-11-17 01:30:14.630325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:06.187 [2024-11-17 01:30:14.630472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.187 01:30:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.187 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.447 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.447 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.447 "name": "raid_bdev1", 00:10:06.447 "uuid": "bdfb7412-b464-46a7-8990-54aede9fe0e6", 00:10:06.447 "strip_size_kb": 64, 00:10:06.447 "state": "online", 00:10:06.447 "raid_level": "concat", 00:10:06.447 "superblock": true, 00:10:06.447 "num_base_bdevs": 3, 00:10:06.447 "num_base_bdevs_discovered": 3, 00:10:06.447 "num_base_bdevs_operational": 3, 00:10:06.447 "base_bdevs_list": [ 00:10:06.447 { 00:10:06.447 "name": "BaseBdev1", 00:10:06.447 "uuid": "2ad76e2c-3db0-5662-9c5e-5d3bba975863", 00:10:06.447 "is_configured": true, 00:10:06.447 "data_offset": 2048, 00:10:06.447 "data_size": 63488 00:10:06.447 }, 00:10:06.447 { 00:10:06.447 "name": "BaseBdev2", 00:10:06.447 "uuid": "667dcd9e-2408-5b61-888d-637365042870", 00:10:06.447 "is_configured": true, 00:10:06.447 "data_offset": 2048, 00:10:06.447 "data_size": 63488 
00:10:06.447 }, 00:10:06.447 { 00:10:06.447 "name": "BaseBdev3", 00:10:06.447 "uuid": "a2fbde6d-c583-5846-b6ec-abcb5540fcf5", 00:10:06.447 "is_configured": true, 00:10:06.447 "data_offset": 2048, 00:10:06.447 "data_size": 63488 00:10:06.447 } 00:10:06.447 ] 00:10:06.447 }' 00:10:06.447 01:30:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.447 01:30:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.706 01:30:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:06.706 01:30:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:06.965 [2024-11-17 01:30:15.188155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.904 "name": "raid_bdev1", 00:10:07.904 "uuid": "bdfb7412-b464-46a7-8990-54aede9fe0e6", 00:10:07.904 "strip_size_kb": 64, 00:10:07.904 "state": "online", 00:10:07.904 "raid_level": "concat", 00:10:07.904 "superblock": true, 00:10:07.904 "num_base_bdevs": 3, 00:10:07.904 "num_base_bdevs_discovered": 3, 00:10:07.904 "num_base_bdevs_operational": 3, 00:10:07.904 "base_bdevs_list": [ 00:10:07.904 { 00:10:07.904 "name": "BaseBdev1", 00:10:07.904 "uuid": "2ad76e2c-3db0-5662-9c5e-5d3bba975863", 00:10:07.904 "is_configured": true, 00:10:07.904 "data_offset": 2048, 00:10:07.904 "data_size": 63488 
00:10:07.904 }, 00:10:07.904 { 00:10:07.904 "name": "BaseBdev2", 00:10:07.904 "uuid": "667dcd9e-2408-5b61-888d-637365042870", 00:10:07.904 "is_configured": true, 00:10:07.904 "data_offset": 2048, 00:10:07.904 "data_size": 63488 00:10:07.904 }, 00:10:07.904 { 00:10:07.904 "name": "BaseBdev3", 00:10:07.904 "uuid": "a2fbde6d-c583-5846-b6ec-abcb5540fcf5", 00:10:07.904 "is_configured": true, 00:10:07.904 "data_offset": 2048, 00:10:07.904 "data_size": 63488 00:10:07.904 } 00:10:07.904 ] 00:10:07.904 }' 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.904 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.164 [2024-11-17 01:30:16.571636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:08.164 [2024-11-17 01:30:16.571674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.164 [2024-11-17 01:30:16.574182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.164 [2024-11-17 01:30:16.574231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.164 [2024-11-17 01:30:16.574268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.164 [2024-11-17 01:30:16.574279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:08.164 { 00:10:08.164 "results": [ 00:10:08.164 { 00:10:08.164 "job": "raid_bdev1", 00:10:08.164 "core_mask": "0x1", 00:10:08.164 "workload": "randrw", 00:10:08.164 "percentage": 50, 
00:10:08.164 "status": "finished", 00:10:08.164 "queue_depth": 1, 00:10:08.164 "io_size": 131072, 00:10:08.164 "runtime": 1.384351, 00:10:08.164 "iops": 16694.465493216678, 00:10:08.164 "mibps": 2086.8081866520847, 00:10:08.164 "io_failed": 1, 00:10:08.164 "io_timeout": 0, 00:10:08.164 "avg_latency_us": 83.28262873329192, 00:10:08.164 "min_latency_us": 24.482096069868994, 00:10:08.164 "max_latency_us": 1352.216593886463 00:10:08.164 } 00:10:08.164 ], 00:10:08.164 "core_count": 1 00:10:08.164 } 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66920 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66920 ']' 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66920 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66920 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.164 killing process with pid 66920 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66920' 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66920 00:10:08.164 [2024-11-17 01:30:16.611291] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.164 01:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66920 00:10:08.423 [2024-11-17 
01:30:16.827330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.802 01:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.foWJYkZlR9 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:09.803 00:10:09.803 real 0m4.463s 00:10:09.803 user 0m5.339s 00:10:09.803 sys 0m0.533s 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.803 01:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.803 ************************************ 00:10:09.803 END TEST raid_read_error_test 00:10:09.803 ************************************ 00:10:09.803 01:30:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:09.803 01:30:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:09.803 01:30:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.803 01:30:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.803 ************************************ 00:10:09.803 START TEST raid_write_error_test 00:10:09.803 ************************************ 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:09.803 01:30:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:09.803 01:30:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WpvRrBmmCr 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67065 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67065 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67065 ']' 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.803 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.803 [2024-11-17 01:30:18.141064] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:09.803 [2024-11-17 01:30:18.141184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67065 ] 00:10:10.062 [2024-11-17 01:30:18.321505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.062 [2024-11-17 01:30:18.434281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.321 [2024-11-17 01:30:18.630678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.321 [2024-11-17 01:30:18.630745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.580 BaseBdev1_malloc 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.580 01:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.580 true 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.580 [2024-11-17 01:30:19.017924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:10.580 [2024-11-17 01:30:19.017990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.580 [2024-11-17 01:30:19.018010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:10.580 [2024-11-17 01:30:19.018021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.580 [2024-11-17 01:30:19.020268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.580 [2024-11-17 01:30:19.020316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:10.580 BaseBdev1 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.580 01:30:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.840 BaseBdev2_malloc 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.840 true 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.840 [2024-11-17 01:30:19.081490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:10.840 [2024-11-17 01:30:19.081540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.840 [2024-11-17 01:30:19.081556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:10.840 [2024-11-17 01:30:19.081566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.840 [2024-11-17 01:30:19.083594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.840 [2024-11-17 01:30:19.083633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:10.840 BaseBdev2 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:10.840 01:30:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.840 BaseBdev3_malloc 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.840 true 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.840 [2024-11-17 01:30:19.167467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:10.840 [2024-11-17 01:30:19.167969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.840 [2024-11-17 01:30:19.168056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:10.840 [2024-11-17 01:30:19.168113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.840 [2024-11-17 01:30:19.170265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.840 [2024-11-17 01:30:19.170400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:10.840 BaseBdev3 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.840 [2024-11-17 01:30:19.179502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.840 [2024-11-17 01:30:19.181313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.840 [2024-11-17 01:30:19.181413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.840 [2024-11-17 01:30:19.181611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:10.840 [2024-11-17 01:30:19.181634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:10.840 [2024-11-17 01:30:19.181900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:10.840 [2024-11-17 01:30:19.182072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:10.840 [2024-11-17 01:30:19.182093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:10.840 [2024-11-17 01:30:19.182253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.840 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.841 "name": "raid_bdev1", 00:10:10.841 "uuid": "6a269373-00c3-4dda-9cf6-b47a7ad4ae00", 00:10:10.841 "strip_size_kb": 64, 00:10:10.841 "state": "online", 00:10:10.841 "raid_level": "concat", 00:10:10.841 "superblock": true, 00:10:10.841 "num_base_bdevs": 3, 00:10:10.841 "num_base_bdevs_discovered": 3, 00:10:10.841 "num_base_bdevs_operational": 3, 00:10:10.841 "base_bdevs_list": [ 00:10:10.841 { 00:10:10.841 
"name": "BaseBdev1", 00:10:10.841 "uuid": "eebd3d40-4c2d-5dec-8560-0e1435ed91dd", 00:10:10.841 "is_configured": true, 00:10:10.841 "data_offset": 2048, 00:10:10.841 "data_size": 63488 00:10:10.841 }, 00:10:10.841 { 00:10:10.841 "name": "BaseBdev2", 00:10:10.841 "uuid": "d3b4f6eb-e848-59d8-9af9-f7763ed01bc5", 00:10:10.841 "is_configured": true, 00:10:10.841 "data_offset": 2048, 00:10:10.841 "data_size": 63488 00:10:10.841 }, 00:10:10.841 { 00:10:10.841 "name": "BaseBdev3", 00:10:10.841 "uuid": "7f08e43a-c25c-57b2-8617-7d9fc28cfb69", 00:10:10.841 "is_configured": true, 00:10:10.841 "data_offset": 2048, 00:10:10.841 "data_size": 63488 00:10:10.841 } 00:10:10.841 ] 00:10:10.841 }' 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.841 01:30:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.406 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:11.406 01:30:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:11.406 [2024-11-17 01:30:19.679794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:12.356 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:12.356 01:30:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.356 01:30:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.356 01:30:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.356 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:12.356 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:12.356 01:30:20 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:12.356 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.357 "name": "raid_bdev1", 00:10:12.357 "uuid": "6a269373-00c3-4dda-9cf6-b47a7ad4ae00", 00:10:12.357 "strip_size_kb": 64, 00:10:12.357 "state": "online", 
00:10:12.357 "raid_level": "concat", 00:10:12.357 "superblock": true, 00:10:12.357 "num_base_bdevs": 3, 00:10:12.357 "num_base_bdevs_discovered": 3, 00:10:12.357 "num_base_bdevs_operational": 3, 00:10:12.357 "base_bdevs_list": [ 00:10:12.357 { 00:10:12.357 "name": "BaseBdev1", 00:10:12.357 "uuid": "eebd3d40-4c2d-5dec-8560-0e1435ed91dd", 00:10:12.357 "is_configured": true, 00:10:12.357 "data_offset": 2048, 00:10:12.357 "data_size": 63488 00:10:12.357 }, 00:10:12.357 { 00:10:12.357 "name": "BaseBdev2", 00:10:12.357 "uuid": "d3b4f6eb-e848-59d8-9af9-f7763ed01bc5", 00:10:12.357 "is_configured": true, 00:10:12.357 "data_offset": 2048, 00:10:12.357 "data_size": 63488 00:10:12.357 }, 00:10:12.357 { 00:10:12.357 "name": "BaseBdev3", 00:10:12.357 "uuid": "7f08e43a-c25c-57b2-8617-7d9fc28cfb69", 00:10:12.357 "is_configured": true, 00:10:12.357 "data_offset": 2048, 00:10:12.357 "data_size": 63488 00:10:12.357 } 00:10:12.357 ] 00:10:12.357 }' 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.357 01:30:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.617 [2024-11-17 01:30:21.053943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.617 [2024-11-17 01:30:21.053983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.617 [2024-11-17 01:30:21.056478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.617 [2024-11-17 01:30:21.056528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.617 [2024-11-17 01:30:21.056565] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.617 [2024-11-17 01:30:21.056576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:12.617 { 00:10:12.617 "results": [ 00:10:12.617 { 00:10:12.617 "job": "raid_bdev1", 00:10:12.617 "core_mask": "0x1", 00:10:12.617 "workload": "randrw", 00:10:12.617 "percentage": 50, 00:10:12.617 "status": "finished", 00:10:12.617 "queue_depth": 1, 00:10:12.617 "io_size": 131072, 00:10:12.617 "runtime": 1.375038, 00:10:12.617 "iops": 16522.452470404452, 00:10:12.617 "mibps": 2065.3065588005566, 00:10:12.617 "io_failed": 1, 00:10:12.617 "io_timeout": 0, 00:10:12.617 "avg_latency_us": 84.15928654898826, 00:10:12.617 "min_latency_us": 25.6, 00:10:12.617 "max_latency_us": 1373.6803493449781 00:10:12.617 } 00:10:12.617 ], 00:10:12.617 "core_count": 1 00:10:12.617 } 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67065 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67065 ']' 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67065 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.617 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67065 00:10:12.877 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.877 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.877 killing process with pid 67065 00:10:12.877 01:30:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67065' 00:10:12.877 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67065 00:10:12.877 [2024-11-17 01:30:21.101582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.877 01:30:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67065 00:10:12.877 [2024-11-17 01:30:21.330519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WpvRrBmmCr 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:14.256 00:10:14.256 real 0m4.427s 00:10:14.256 user 0m5.197s 00:10:14.256 sys 0m0.577s 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.256 01:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.256 ************************************ 00:10:14.256 END TEST raid_write_error_test 00:10:14.256 ************************************ 00:10:14.256 01:30:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:14.256 01:30:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:14.256 01:30:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:14.256 01:30:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.256 01:30:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.256 ************************************ 00:10:14.256 START TEST raid_state_function_test 00:10:14.256 ************************************ 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:14.256 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67209 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67209' 00:10:14.257 Process raid pid: 67209 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67209 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67209 ']' 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.257 01:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.257 [2024-11-17 01:30:22.615765] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:14.257 [2024-11-17 01:30:22.615885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.516 [2024-11-17 01:30:22.786861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.516 [2024-11-17 01:30:22.891731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.775 [2024-11-17 01:30:23.090422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.775 [2024-11-17 01:30:23.090470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.035 [2024-11-17 01:30:23.456282] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.035 [2024-11-17 01:30:23.456331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.035 [2024-11-17 01:30:23.456341] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.035 [2024-11-17 01:30:23.456351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.035 [2024-11-17 01:30:23.456357] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.035 [2024-11-17 01:30:23.456365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.035 
01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.035 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.294 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.294 "name": "Existed_Raid", 00:10:15.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.294 "strip_size_kb": 0, 00:10:15.294 "state": "configuring", 00:10:15.294 "raid_level": "raid1", 00:10:15.294 "superblock": false, 00:10:15.294 "num_base_bdevs": 3, 00:10:15.294 "num_base_bdevs_discovered": 0, 00:10:15.294 "num_base_bdevs_operational": 3, 00:10:15.294 "base_bdevs_list": [ 00:10:15.294 { 00:10:15.294 "name": "BaseBdev1", 00:10:15.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.294 "is_configured": false, 00:10:15.294 "data_offset": 0, 00:10:15.294 "data_size": 0 00:10:15.294 }, 00:10:15.294 { 00:10:15.294 "name": "BaseBdev2", 00:10:15.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.294 "is_configured": false, 00:10:15.294 "data_offset": 0, 00:10:15.294 "data_size": 0 00:10:15.294 }, 00:10:15.294 { 00:10:15.294 "name": "BaseBdev3", 00:10:15.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.294 "is_configured": false, 00:10:15.294 "data_offset": 0, 00:10:15.294 "data_size": 0 00:10:15.294 } 00:10:15.294 ] 00:10:15.294 }' 00:10:15.294 01:30:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.294 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.553 [2024-11-17 01:30:23.931445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.553 [2024-11-17 01:30:23.931489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.553 [2024-11-17 01:30:23.943398] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.553 [2024-11-17 01:30:23.943440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.553 [2024-11-17 01:30:23.943448] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.553 [2024-11-17 01:30:23.943457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.553 [2024-11-17 01:30:23.943463] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.553 [2024-11-17 01:30:23.943471] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.553 [2024-11-17 01:30:23.992560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.553 BaseBdev1 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.553 01:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.553 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.553 01:30:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.553 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.553 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.813 [ 00:10:15.813 { 00:10:15.813 "name": "BaseBdev1", 00:10:15.813 "aliases": [ 00:10:15.813 "dade9a6a-433e-4d67-a1fb-d393c601eee6" 00:10:15.813 ], 00:10:15.813 "product_name": "Malloc disk", 00:10:15.813 "block_size": 512, 00:10:15.813 "num_blocks": 65536, 00:10:15.813 "uuid": "dade9a6a-433e-4d67-a1fb-d393c601eee6", 00:10:15.813 "assigned_rate_limits": { 00:10:15.813 "rw_ios_per_sec": 0, 00:10:15.813 "rw_mbytes_per_sec": 0, 00:10:15.813 "r_mbytes_per_sec": 0, 00:10:15.813 "w_mbytes_per_sec": 0 00:10:15.813 }, 00:10:15.813 "claimed": true, 00:10:15.813 "claim_type": "exclusive_write", 00:10:15.813 "zoned": false, 00:10:15.813 "supported_io_types": { 00:10:15.813 "read": true, 00:10:15.813 "write": true, 00:10:15.813 "unmap": true, 00:10:15.813 "flush": true, 00:10:15.813 "reset": true, 00:10:15.813 "nvme_admin": false, 00:10:15.813 "nvme_io": false, 00:10:15.813 "nvme_io_md": false, 00:10:15.813 "write_zeroes": true, 00:10:15.813 "zcopy": true, 00:10:15.813 "get_zone_info": false, 00:10:15.813 "zone_management": false, 00:10:15.813 "zone_append": false, 00:10:15.813 "compare": false, 00:10:15.813 "compare_and_write": false, 00:10:15.813 "abort": true, 00:10:15.813 "seek_hole": false, 00:10:15.813 "seek_data": false, 00:10:15.813 "copy": true, 00:10:15.813 "nvme_iov_md": false 00:10:15.813 }, 00:10:15.813 "memory_domains": [ 00:10:15.813 { 00:10:15.813 "dma_device_id": "system", 00:10:15.813 "dma_device_type": 1 00:10:15.813 }, 00:10:15.813 { 00:10:15.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.813 "dma_device_type": 2 00:10:15.813 } 00:10:15.813 ], 00:10:15.813 "driver_specific": {} 00:10:15.813 } 00:10:15.813 ] 00:10:15.813 01:30:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:15.813 "name": "Existed_Raid", 00:10:15.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.813 "strip_size_kb": 0, 00:10:15.813 "state": "configuring", 00:10:15.813 "raid_level": "raid1", 00:10:15.813 "superblock": false, 00:10:15.813 "num_base_bdevs": 3, 00:10:15.813 "num_base_bdevs_discovered": 1, 00:10:15.813 "num_base_bdevs_operational": 3, 00:10:15.813 "base_bdevs_list": [ 00:10:15.813 { 00:10:15.813 "name": "BaseBdev1", 00:10:15.813 "uuid": "dade9a6a-433e-4d67-a1fb-d393c601eee6", 00:10:15.813 "is_configured": true, 00:10:15.813 "data_offset": 0, 00:10:15.813 "data_size": 65536 00:10:15.813 }, 00:10:15.813 { 00:10:15.813 "name": "BaseBdev2", 00:10:15.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.813 "is_configured": false, 00:10:15.813 "data_offset": 0, 00:10:15.813 "data_size": 0 00:10:15.813 }, 00:10:15.813 { 00:10:15.813 "name": "BaseBdev3", 00:10:15.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.813 "is_configured": false, 00:10:15.813 "data_offset": 0, 00:10:15.813 "data_size": 0 00:10:15.813 } 00:10:15.813 ] 00:10:15.813 }' 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.813 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.073 [2024-11-17 01:30:24.479770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.073 [2024-11-17 01:30:24.479845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.073 [2024-11-17 01:30:24.491796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.073 [2024-11-17 01:30:24.493608] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.073 [2024-11-17 01:30:24.493647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.073 [2024-11-17 01:30:24.493673] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:16.073 [2024-11-17 01:30:24.493683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.073 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.074 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.333 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.333 "name": "Existed_Raid", 00:10:16.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.333 "strip_size_kb": 0, 00:10:16.333 "state": "configuring", 00:10:16.333 "raid_level": "raid1", 00:10:16.333 "superblock": false, 00:10:16.333 "num_base_bdevs": 3, 00:10:16.333 "num_base_bdevs_discovered": 1, 00:10:16.333 "num_base_bdevs_operational": 3, 00:10:16.333 "base_bdevs_list": [ 00:10:16.333 { 00:10:16.333 "name": "BaseBdev1", 00:10:16.333 "uuid": "dade9a6a-433e-4d67-a1fb-d393c601eee6", 00:10:16.333 "is_configured": true, 00:10:16.333 "data_offset": 0, 00:10:16.333 "data_size": 65536 00:10:16.333 }, 00:10:16.333 { 00:10:16.333 "name": "BaseBdev2", 00:10:16.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.333 
"is_configured": false, 00:10:16.333 "data_offset": 0, 00:10:16.333 "data_size": 0 00:10:16.333 }, 00:10:16.333 { 00:10:16.333 "name": "BaseBdev3", 00:10:16.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.333 "is_configured": false, 00:10:16.333 "data_offset": 0, 00:10:16.333 "data_size": 0 00:10:16.333 } 00:10:16.333 ] 00:10:16.333 }' 00:10:16.333 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.333 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.592 [2024-11-17 01:30:24.956785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.592 BaseBdev2 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.592 01:30:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.592 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.592 [ 00:10:16.592 { 00:10:16.592 "name": "BaseBdev2", 00:10:16.592 "aliases": [ 00:10:16.592 "d9ca484a-0bb6-4c97-b3b6-c0c5ee783aa5" 00:10:16.592 ], 00:10:16.592 "product_name": "Malloc disk", 00:10:16.592 "block_size": 512, 00:10:16.592 "num_blocks": 65536, 00:10:16.593 "uuid": "d9ca484a-0bb6-4c97-b3b6-c0c5ee783aa5", 00:10:16.593 "assigned_rate_limits": { 00:10:16.593 "rw_ios_per_sec": 0, 00:10:16.593 "rw_mbytes_per_sec": 0, 00:10:16.593 "r_mbytes_per_sec": 0, 00:10:16.593 "w_mbytes_per_sec": 0 00:10:16.593 }, 00:10:16.593 "claimed": true, 00:10:16.593 "claim_type": "exclusive_write", 00:10:16.593 "zoned": false, 00:10:16.593 "supported_io_types": { 00:10:16.593 "read": true, 00:10:16.593 "write": true, 00:10:16.593 "unmap": true, 00:10:16.593 "flush": true, 00:10:16.593 "reset": true, 00:10:16.593 "nvme_admin": false, 00:10:16.593 "nvme_io": false, 00:10:16.593 "nvme_io_md": false, 00:10:16.593 "write_zeroes": true, 00:10:16.593 "zcopy": true, 00:10:16.593 "get_zone_info": false, 00:10:16.593 "zone_management": false, 00:10:16.593 "zone_append": false, 00:10:16.593 "compare": false, 00:10:16.593 "compare_and_write": false, 00:10:16.593 "abort": true, 00:10:16.593 "seek_hole": false, 00:10:16.593 "seek_data": false, 00:10:16.593 "copy": true, 00:10:16.593 "nvme_iov_md": false 00:10:16.593 }, 00:10:16.593 
"memory_domains": [ 00:10:16.593 { 00:10:16.593 "dma_device_id": "system", 00:10:16.593 "dma_device_type": 1 00:10:16.593 }, 00:10:16.593 { 00:10:16.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.593 "dma_device_type": 2 00:10:16.593 } 00:10:16.593 ], 00:10:16.593 "driver_specific": {} 00:10:16.593 } 00:10:16.593 ] 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.593 01:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.593 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.593 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.593 "name": "Existed_Raid", 00:10:16.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.593 "strip_size_kb": 0, 00:10:16.593 "state": "configuring", 00:10:16.593 "raid_level": "raid1", 00:10:16.593 "superblock": false, 00:10:16.593 "num_base_bdevs": 3, 00:10:16.593 "num_base_bdevs_discovered": 2, 00:10:16.593 "num_base_bdevs_operational": 3, 00:10:16.593 "base_bdevs_list": [ 00:10:16.593 { 00:10:16.593 "name": "BaseBdev1", 00:10:16.593 "uuid": "dade9a6a-433e-4d67-a1fb-d393c601eee6", 00:10:16.593 "is_configured": true, 00:10:16.593 "data_offset": 0, 00:10:16.593 "data_size": 65536 00:10:16.593 }, 00:10:16.593 { 00:10:16.593 "name": "BaseBdev2", 00:10:16.593 "uuid": "d9ca484a-0bb6-4c97-b3b6-c0c5ee783aa5", 00:10:16.593 "is_configured": true, 00:10:16.593 "data_offset": 0, 00:10:16.593 "data_size": 65536 00:10:16.593 }, 00:10:16.593 { 00:10:16.593 "name": "BaseBdev3", 00:10:16.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.593 "is_configured": false, 00:10:16.593 "data_offset": 0, 00:10:16.593 "data_size": 0 00:10:16.593 } 00:10:16.593 ] 00:10:16.593 }' 00:10:16.593 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.593 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.161 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:17.161 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.161 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.161 [2024-11-17 01:30:25.493464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.161 [2024-11-17 01:30:25.493521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:17.161 [2024-11-17 01:30:25.493534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:17.161 [2024-11-17 01:30:25.493818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:17.161 [2024-11-17 01:30:25.493987] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:17.161 [2024-11-17 01:30:25.494004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:17.161 [2024-11-17 01:30:25.494253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.161 BaseBdev3 00:10:17.161 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.161 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:17.161 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:17.161 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.162 [ 00:10:17.162 { 00:10:17.162 "name": "BaseBdev3", 00:10:17.162 "aliases": [ 00:10:17.162 "ee876b12-324a-4f3a-820e-2537b623b6f1" 00:10:17.162 ], 00:10:17.162 "product_name": "Malloc disk", 00:10:17.162 "block_size": 512, 00:10:17.162 "num_blocks": 65536, 00:10:17.162 "uuid": "ee876b12-324a-4f3a-820e-2537b623b6f1", 00:10:17.162 "assigned_rate_limits": { 00:10:17.162 "rw_ios_per_sec": 0, 00:10:17.162 "rw_mbytes_per_sec": 0, 00:10:17.162 "r_mbytes_per_sec": 0, 00:10:17.162 "w_mbytes_per_sec": 0 00:10:17.162 }, 00:10:17.162 "claimed": true, 00:10:17.162 "claim_type": "exclusive_write", 00:10:17.162 "zoned": false, 00:10:17.162 "supported_io_types": { 00:10:17.162 "read": true, 00:10:17.162 "write": true, 00:10:17.162 "unmap": true, 00:10:17.162 "flush": true, 00:10:17.162 "reset": true, 00:10:17.162 "nvme_admin": false, 00:10:17.162 "nvme_io": false, 00:10:17.162 "nvme_io_md": false, 00:10:17.162 "write_zeroes": true, 00:10:17.162 "zcopy": true, 00:10:17.162 "get_zone_info": false, 00:10:17.162 "zone_management": false, 00:10:17.162 "zone_append": false, 00:10:17.162 "compare": false, 00:10:17.162 "compare_and_write": false, 00:10:17.162 "abort": true, 00:10:17.162 "seek_hole": false, 00:10:17.162 "seek_data": false, 00:10:17.162 
"copy": true, 00:10:17.162 "nvme_iov_md": false 00:10:17.162 }, 00:10:17.162 "memory_domains": [ 00:10:17.162 { 00:10:17.162 "dma_device_id": "system", 00:10:17.162 "dma_device_type": 1 00:10:17.162 }, 00:10:17.162 { 00:10:17.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.162 "dma_device_type": 2 00:10:17.162 } 00:10:17.162 ], 00:10:17.162 "driver_specific": {} 00:10:17.162 } 00:10:17.162 ] 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.162 01:30:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.162 "name": "Existed_Raid", 00:10:17.162 "uuid": "b8101085-a4c5-4110-9ffe-049f506bce38", 00:10:17.162 "strip_size_kb": 0, 00:10:17.162 "state": "online", 00:10:17.162 "raid_level": "raid1", 00:10:17.162 "superblock": false, 00:10:17.162 "num_base_bdevs": 3, 00:10:17.162 "num_base_bdevs_discovered": 3, 00:10:17.162 "num_base_bdevs_operational": 3, 00:10:17.162 "base_bdevs_list": [ 00:10:17.162 { 00:10:17.162 "name": "BaseBdev1", 00:10:17.162 "uuid": "dade9a6a-433e-4d67-a1fb-d393c601eee6", 00:10:17.162 "is_configured": true, 00:10:17.162 "data_offset": 0, 00:10:17.162 "data_size": 65536 00:10:17.162 }, 00:10:17.162 { 00:10:17.162 "name": "BaseBdev2", 00:10:17.162 "uuid": "d9ca484a-0bb6-4c97-b3b6-c0c5ee783aa5", 00:10:17.162 "is_configured": true, 00:10:17.162 "data_offset": 0, 00:10:17.162 "data_size": 65536 00:10:17.162 }, 00:10:17.162 { 00:10:17.162 "name": "BaseBdev3", 00:10:17.162 "uuid": "ee876b12-324a-4f3a-820e-2537b623b6f1", 00:10:17.162 "is_configured": true, 00:10:17.162 "data_offset": 0, 00:10:17.162 "data_size": 65536 00:10:17.162 } 00:10:17.162 ] 00:10:17.162 }' 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.162 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.739 01:30:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.739 [2024-11-17 01:30:25.973052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.739 01:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.739 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.739 "name": "Existed_Raid", 00:10:17.739 "aliases": [ 00:10:17.739 "b8101085-a4c5-4110-9ffe-049f506bce38" 00:10:17.739 ], 00:10:17.739 "product_name": "Raid Volume", 00:10:17.739 "block_size": 512, 00:10:17.739 "num_blocks": 65536, 00:10:17.739 "uuid": "b8101085-a4c5-4110-9ffe-049f506bce38", 00:10:17.739 "assigned_rate_limits": { 00:10:17.739 "rw_ios_per_sec": 0, 00:10:17.739 "rw_mbytes_per_sec": 0, 00:10:17.739 "r_mbytes_per_sec": 0, 00:10:17.739 "w_mbytes_per_sec": 0 00:10:17.739 }, 00:10:17.739 "claimed": false, 00:10:17.739 "zoned": false, 
00:10:17.739 "supported_io_types": { 00:10:17.739 "read": true, 00:10:17.739 "write": true, 00:10:17.739 "unmap": false, 00:10:17.739 "flush": false, 00:10:17.739 "reset": true, 00:10:17.739 "nvme_admin": false, 00:10:17.739 "nvme_io": false, 00:10:17.739 "nvme_io_md": false, 00:10:17.739 "write_zeroes": true, 00:10:17.739 "zcopy": false, 00:10:17.739 "get_zone_info": false, 00:10:17.739 "zone_management": false, 00:10:17.739 "zone_append": false, 00:10:17.739 "compare": false, 00:10:17.739 "compare_and_write": false, 00:10:17.739 "abort": false, 00:10:17.739 "seek_hole": false, 00:10:17.739 "seek_data": false, 00:10:17.739 "copy": false, 00:10:17.739 "nvme_iov_md": false 00:10:17.739 }, 00:10:17.739 "memory_domains": [ 00:10:17.739 { 00:10:17.739 "dma_device_id": "system", 00:10:17.739 "dma_device_type": 1 00:10:17.739 }, 00:10:17.739 { 00:10:17.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.739 "dma_device_type": 2 00:10:17.739 }, 00:10:17.739 { 00:10:17.739 "dma_device_id": "system", 00:10:17.739 "dma_device_type": 1 00:10:17.739 }, 00:10:17.739 { 00:10:17.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.739 "dma_device_type": 2 00:10:17.739 }, 00:10:17.739 { 00:10:17.739 "dma_device_id": "system", 00:10:17.739 "dma_device_type": 1 00:10:17.739 }, 00:10:17.739 { 00:10:17.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.739 "dma_device_type": 2 00:10:17.739 } 00:10:17.739 ], 00:10:17.739 "driver_specific": { 00:10:17.739 "raid": { 00:10:17.739 "uuid": "b8101085-a4c5-4110-9ffe-049f506bce38", 00:10:17.739 "strip_size_kb": 0, 00:10:17.739 "state": "online", 00:10:17.739 "raid_level": "raid1", 00:10:17.739 "superblock": false, 00:10:17.739 "num_base_bdevs": 3, 00:10:17.739 "num_base_bdevs_discovered": 3, 00:10:17.739 "num_base_bdevs_operational": 3, 00:10:17.739 "base_bdevs_list": [ 00:10:17.739 { 00:10:17.739 "name": "BaseBdev1", 00:10:17.739 "uuid": "dade9a6a-433e-4d67-a1fb-d393c601eee6", 00:10:17.739 "is_configured": true, 00:10:17.739 
"data_offset": 0, 00:10:17.739 "data_size": 65536 00:10:17.739 }, 00:10:17.739 { 00:10:17.739 "name": "BaseBdev2", 00:10:17.739 "uuid": "d9ca484a-0bb6-4c97-b3b6-c0c5ee783aa5", 00:10:17.739 "is_configured": true, 00:10:17.739 "data_offset": 0, 00:10:17.739 "data_size": 65536 00:10:17.739 }, 00:10:17.739 { 00:10:17.739 "name": "BaseBdev3", 00:10:17.739 "uuid": "ee876b12-324a-4f3a-820e-2537b623b6f1", 00:10:17.739 "is_configured": true, 00:10:17.739 "data_offset": 0, 00:10:17.739 "data_size": 65536 00:10:17.739 } 00:10:17.739 ] 00:10:17.739 } 00:10:17.739 } 00:10:17.739 }' 00:10:17.739 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.739 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.739 BaseBdev2 00:10:17.739 BaseBdev3' 00:10:17.739 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.739 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.739 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.739 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.739 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.740 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.055 [2024-11-17 01:30:26.244250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.055 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.056 "name": "Existed_Raid", 00:10:18.056 "uuid": "b8101085-a4c5-4110-9ffe-049f506bce38", 00:10:18.056 "strip_size_kb": 0, 00:10:18.056 "state": "online", 00:10:18.056 "raid_level": "raid1", 00:10:18.056 "superblock": false, 00:10:18.056 "num_base_bdevs": 3, 00:10:18.056 "num_base_bdevs_discovered": 2, 00:10:18.056 "num_base_bdevs_operational": 2, 00:10:18.056 "base_bdevs_list": [ 00:10:18.056 { 00:10:18.056 "name": null, 00:10:18.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.056 "is_configured": false, 00:10:18.056 "data_offset": 0, 00:10:18.056 "data_size": 65536 00:10:18.056 }, 00:10:18.056 { 00:10:18.056 "name": "BaseBdev2", 00:10:18.056 "uuid": "d9ca484a-0bb6-4c97-b3b6-c0c5ee783aa5", 00:10:18.056 "is_configured": true, 00:10:18.056 "data_offset": 0, 00:10:18.056 "data_size": 65536 00:10:18.056 }, 00:10:18.056 { 00:10:18.056 "name": "BaseBdev3", 00:10:18.056 "uuid": "ee876b12-324a-4f3a-820e-2537b623b6f1", 00:10:18.056 "is_configured": true, 00:10:18.056 "data_offset": 0, 00:10:18.056 "data_size": 65536 00:10:18.056 } 00:10:18.056 ] 
00:10:18.056 }' 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.056 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.323 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:18.323 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.323 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.323 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.323 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.323 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.583 [2024-11-17 01:30:26.832562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.583 01:30:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.583 01:30:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.583 [2024-11-17 01:30:26.987151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.583 [2024-11-17 01:30:26.987249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.843 [2024-11-17 01:30:27.081410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.843 [2024-11-17 01:30:27.081471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.843 [2024-11-17 01:30:27.081484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.843 01:30:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.843 BaseBdev2 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.843 
01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.843 [ 00:10:18.843 { 00:10:18.843 "name": "BaseBdev2", 00:10:18.843 "aliases": [ 00:10:18.843 "25a76bd2-7d60-4680-ad4b-1496fd1dec35" 00:10:18.843 ], 00:10:18.843 "product_name": "Malloc disk", 00:10:18.843 "block_size": 512, 00:10:18.843 "num_blocks": 65536, 00:10:18.843 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:18.843 "assigned_rate_limits": { 00:10:18.843 "rw_ios_per_sec": 0, 00:10:18.843 "rw_mbytes_per_sec": 0, 00:10:18.843 "r_mbytes_per_sec": 0, 00:10:18.843 "w_mbytes_per_sec": 0 00:10:18.843 }, 00:10:18.843 "claimed": false, 00:10:18.843 "zoned": false, 00:10:18.843 "supported_io_types": { 00:10:18.843 "read": true, 00:10:18.843 "write": true, 00:10:18.843 "unmap": true, 00:10:18.843 "flush": true, 00:10:18.843 "reset": true, 00:10:18.843 "nvme_admin": false, 00:10:18.843 "nvme_io": false, 00:10:18.843 "nvme_io_md": false, 00:10:18.843 "write_zeroes": true, 
00:10:18.843 "zcopy": true, 00:10:18.843 "get_zone_info": false, 00:10:18.843 "zone_management": false, 00:10:18.843 "zone_append": false, 00:10:18.843 "compare": false, 00:10:18.843 "compare_and_write": false, 00:10:18.843 "abort": true, 00:10:18.843 "seek_hole": false, 00:10:18.843 "seek_data": false, 00:10:18.843 "copy": true, 00:10:18.843 "nvme_iov_md": false 00:10:18.843 }, 00:10:18.843 "memory_domains": [ 00:10:18.843 { 00:10:18.843 "dma_device_id": "system", 00:10:18.843 "dma_device_type": 1 00:10:18.843 }, 00:10:18.843 { 00:10:18.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.843 "dma_device_type": 2 00:10:18.843 } 00:10:18.843 ], 00:10:18.843 "driver_specific": {} 00:10:18.843 } 00:10:18.843 ] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.843 BaseBdev3 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.843 01:30:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.843 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.843 [ 00:10:18.843 { 00:10:18.843 "name": "BaseBdev3", 00:10:18.843 "aliases": [ 00:10:18.843 "c0e33c9c-36b8-4475-86e3-333ab5b16278" 00:10:18.843 ], 00:10:18.843 "product_name": "Malloc disk", 00:10:18.843 "block_size": 512, 00:10:18.843 "num_blocks": 65536, 00:10:18.843 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:18.843 "assigned_rate_limits": { 00:10:18.843 "rw_ios_per_sec": 0, 00:10:18.843 "rw_mbytes_per_sec": 0, 00:10:18.843 "r_mbytes_per_sec": 0, 00:10:18.843 "w_mbytes_per_sec": 0 00:10:18.843 }, 00:10:18.843 "claimed": false, 00:10:18.843 "zoned": false, 00:10:18.843 "supported_io_types": { 00:10:18.843 "read": true, 00:10:18.843 "write": true, 00:10:18.843 "unmap": true, 00:10:18.843 "flush": true, 00:10:18.843 "reset": true, 00:10:18.843 "nvme_admin": false, 00:10:18.843 "nvme_io": false, 00:10:18.843 "nvme_io_md": false, 00:10:18.843 "write_zeroes": true, 
00:10:18.843 "zcopy": true, 00:10:18.843 "get_zone_info": false, 00:10:18.843 "zone_management": false, 00:10:18.843 "zone_append": false, 00:10:18.843 "compare": false, 00:10:18.843 "compare_and_write": false, 00:10:18.843 "abort": true, 00:10:18.843 "seek_hole": false, 00:10:18.843 "seek_data": false, 00:10:18.843 "copy": true, 00:10:18.843 "nvme_iov_md": false 00:10:18.843 }, 00:10:18.843 "memory_domains": [ 00:10:18.843 { 00:10:18.844 "dma_device_id": "system", 00:10:18.844 "dma_device_type": 1 00:10:18.844 }, 00:10:18.844 { 00:10:18.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.844 "dma_device_type": 2 00:10:18.844 } 00:10:18.844 ], 00:10:18.844 "driver_specific": {} 00:10:18.844 } 00:10:18.844 ] 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.844 [2024-11-17 01:30:27.295520] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.844 [2024-11-17 01:30:27.295569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.844 [2024-11-17 01:30:27.295589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.844 [2024-11-17 01:30:27.297382] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.844 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.103 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:19.103 "name": "Existed_Raid", 00:10:19.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.103 "strip_size_kb": 0, 00:10:19.103 "state": "configuring", 00:10:19.103 "raid_level": "raid1", 00:10:19.103 "superblock": false, 00:10:19.103 "num_base_bdevs": 3, 00:10:19.103 "num_base_bdevs_discovered": 2, 00:10:19.103 "num_base_bdevs_operational": 3, 00:10:19.103 "base_bdevs_list": [ 00:10:19.103 { 00:10:19.103 "name": "BaseBdev1", 00:10:19.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.103 "is_configured": false, 00:10:19.104 "data_offset": 0, 00:10:19.104 "data_size": 0 00:10:19.104 }, 00:10:19.104 { 00:10:19.104 "name": "BaseBdev2", 00:10:19.104 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:19.104 "is_configured": true, 00:10:19.104 "data_offset": 0, 00:10:19.104 "data_size": 65536 00:10:19.104 }, 00:10:19.104 { 00:10:19.104 "name": "BaseBdev3", 00:10:19.104 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:19.104 "is_configured": true, 00:10:19.104 "data_offset": 0, 00:10:19.104 "data_size": 65536 00:10:19.104 } 00:10:19.104 ] 00:10:19.104 }' 00:10:19.104 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.104 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.364 [2024-11-17 01:30:27.726827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.364 "name": "Existed_Raid", 00:10:19.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.364 "strip_size_kb": 0, 00:10:19.364 "state": "configuring", 00:10:19.364 "raid_level": "raid1", 00:10:19.364 "superblock": false, 00:10:19.364 "num_base_bdevs": 3, 
00:10:19.364 "num_base_bdevs_discovered": 1, 00:10:19.364 "num_base_bdevs_operational": 3, 00:10:19.364 "base_bdevs_list": [ 00:10:19.364 { 00:10:19.364 "name": "BaseBdev1", 00:10:19.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.364 "is_configured": false, 00:10:19.364 "data_offset": 0, 00:10:19.364 "data_size": 0 00:10:19.364 }, 00:10:19.364 { 00:10:19.364 "name": null, 00:10:19.364 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:19.364 "is_configured": false, 00:10:19.364 "data_offset": 0, 00:10:19.364 "data_size": 65536 00:10:19.364 }, 00:10:19.364 { 00:10:19.364 "name": "BaseBdev3", 00:10:19.364 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:19.364 "is_configured": true, 00:10:19.364 "data_offset": 0, 00:10:19.364 "data_size": 65536 00:10:19.364 } 00:10:19.364 ] 00:10:19.364 }' 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.364 01:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.933 01:30:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.933 [2024-11-17 01:30:28.254665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.933 BaseBdev1 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.933 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.933 [ 00:10:19.933 { 00:10:19.933 "name": "BaseBdev1", 00:10:19.933 "aliases": [ 00:10:19.933 "884ce788-53bf-4bf5-9e36-31a6fc003205" 00:10:19.933 ], 00:10:19.933 "product_name": "Malloc disk", 
00:10:19.933 "block_size": 512, 00:10:19.933 "num_blocks": 65536, 00:10:19.933 "uuid": "884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:19.933 "assigned_rate_limits": { 00:10:19.933 "rw_ios_per_sec": 0, 00:10:19.933 "rw_mbytes_per_sec": 0, 00:10:19.933 "r_mbytes_per_sec": 0, 00:10:19.933 "w_mbytes_per_sec": 0 00:10:19.933 }, 00:10:19.933 "claimed": true, 00:10:19.933 "claim_type": "exclusive_write", 00:10:19.933 "zoned": false, 00:10:19.933 "supported_io_types": { 00:10:19.933 "read": true, 00:10:19.933 "write": true, 00:10:19.934 "unmap": true, 00:10:19.934 "flush": true, 00:10:19.934 "reset": true, 00:10:19.934 "nvme_admin": false, 00:10:19.934 "nvme_io": false, 00:10:19.934 "nvme_io_md": false, 00:10:19.934 "write_zeroes": true, 00:10:19.934 "zcopy": true, 00:10:19.934 "get_zone_info": false, 00:10:19.934 "zone_management": false, 00:10:19.934 "zone_append": false, 00:10:19.934 "compare": false, 00:10:19.934 "compare_and_write": false, 00:10:19.934 "abort": true, 00:10:19.934 "seek_hole": false, 00:10:19.934 "seek_data": false, 00:10:19.934 "copy": true, 00:10:19.934 "nvme_iov_md": false 00:10:19.934 }, 00:10:19.934 "memory_domains": [ 00:10:19.934 { 00:10:19.934 "dma_device_id": "system", 00:10:19.934 "dma_device_type": 1 00:10:19.934 }, 00:10:19.934 { 00:10:19.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.934 "dma_device_type": 2 00:10:19.934 } 00:10:19.934 ], 00:10:19.934 "driver_specific": {} 00:10:19.934 } 00:10:19.934 ] 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.934 "name": "Existed_Raid", 00:10:19.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.934 "strip_size_kb": 0, 00:10:19.934 "state": "configuring", 00:10:19.934 "raid_level": "raid1", 00:10:19.934 "superblock": false, 00:10:19.934 "num_base_bdevs": 3, 00:10:19.934 "num_base_bdevs_discovered": 2, 00:10:19.934 "num_base_bdevs_operational": 3, 00:10:19.934 "base_bdevs_list": [ 00:10:19.934 { 00:10:19.934 "name": "BaseBdev1", 00:10:19.934 "uuid": 
"884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:19.934 "is_configured": true, 00:10:19.934 "data_offset": 0, 00:10:19.934 "data_size": 65536 00:10:19.934 }, 00:10:19.934 { 00:10:19.934 "name": null, 00:10:19.934 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:19.934 "is_configured": false, 00:10:19.934 "data_offset": 0, 00:10:19.934 "data_size": 65536 00:10:19.934 }, 00:10:19.934 { 00:10:19.934 "name": "BaseBdev3", 00:10:19.934 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:19.934 "is_configured": true, 00:10:19.934 "data_offset": 0, 00:10:19.934 "data_size": 65536 00:10:19.934 } 00:10:19.934 ] 00:10:19.934 }' 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.934 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.503 [2024-11-17 01:30:28.797755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:20.503 01:30:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.503 "name": "Existed_Raid", 00:10:20.503 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:20.503 "strip_size_kb": 0, 00:10:20.503 "state": "configuring", 00:10:20.503 "raid_level": "raid1", 00:10:20.503 "superblock": false, 00:10:20.503 "num_base_bdevs": 3, 00:10:20.503 "num_base_bdevs_discovered": 1, 00:10:20.503 "num_base_bdevs_operational": 3, 00:10:20.503 "base_bdevs_list": [ 00:10:20.503 { 00:10:20.503 "name": "BaseBdev1", 00:10:20.503 "uuid": "884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:20.503 "is_configured": true, 00:10:20.503 "data_offset": 0, 00:10:20.503 "data_size": 65536 00:10:20.503 }, 00:10:20.503 { 00:10:20.503 "name": null, 00:10:20.503 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:20.503 "is_configured": false, 00:10:20.503 "data_offset": 0, 00:10:20.503 "data_size": 65536 00:10:20.503 }, 00:10:20.503 { 00:10:20.503 "name": null, 00:10:20.503 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:20.503 "is_configured": false, 00:10:20.503 "data_offset": 0, 00:10:20.503 "data_size": 65536 00:10:20.503 } 00:10:20.503 ] 00:10:20.503 }' 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.503 01:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.763 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.763 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.763 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.763 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.022 [2024-11-17 01:30:29.264989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.022 "name": "Existed_Raid", 00:10:21.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.022 "strip_size_kb": 0, 00:10:21.022 "state": "configuring", 00:10:21.022 "raid_level": "raid1", 00:10:21.022 "superblock": false, 00:10:21.022 "num_base_bdevs": 3, 00:10:21.022 "num_base_bdevs_discovered": 2, 00:10:21.022 "num_base_bdevs_operational": 3, 00:10:21.022 "base_bdevs_list": [ 00:10:21.022 { 00:10:21.022 "name": "BaseBdev1", 00:10:21.022 "uuid": "884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:21.022 "is_configured": true, 00:10:21.022 "data_offset": 0, 00:10:21.022 "data_size": 65536 00:10:21.022 }, 00:10:21.022 { 00:10:21.022 "name": null, 00:10:21.022 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:21.022 "is_configured": false, 00:10:21.022 "data_offset": 0, 00:10:21.022 "data_size": 65536 00:10:21.022 }, 00:10:21.022 { 00:10:21.022 "name": "BaseBdev3", 00:10:21.022 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:21.022 "is_configured": true, 00:10:21.022 "data_offset": 0, 00:10:21.022 "data_size": 65536 00:10:21.022 } 00:10:21.022 ] 00:10:21.022 }' 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.022 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.281 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.281 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:21.281 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:21.281 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.541 [2024-11-17 01:30:29.780117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.541 "name": "Existed_Raid", 00:10:21.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.541 "strip_size_kb": 0, 00:10:21.541 "state": "configuring", 00:10:21.541 "raid_level": "raid1", 00:10:21.541 "superblock": false, 00:10:21.541 "num_base_bdevs": 3, 00:10:21.541 "num_base_bdevs_discovered": 1, 00:10:21.541 "num_base_bdevs_operational": 3, 00:10:21.541 "base_bdevs_list": [ 00:10:21.541 { 00:10:21.541 "name": null, 00:10:21.541 "uuid": "884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:21.541 "is_configured": false, 00:10:21.541 "data_offset": 0, 00:10:21.541 "data_size": 65536 00:10:21.541 }, 00:10:21.541 { 00:10:21.541 "name": null, 00:10:21.541 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:21.541 "is_configured": false, 00:10:21.541 "data_offset": 0, 00:10:21.541 "data_size": 65536 00:10:21.541 }, 00:10:21.541 { 00:10:21.541 "name": "BaseBdev3", 00:10:21.541 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:21.541 "is_configured": true, 00:10:21.541 "data_offset": 0, 00:10:21.541 "data_size": 65536 00:10:21.541 } 00:10:21.541 ] 00:10:21.541 }' 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.541 01:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.111 [2024-11-17 01:30:30.325427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.111 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.111 "name": "Existed_Raid", 00:10:22.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.111 "strip_size_kb": 0, 00:10:22.111 "state": "configuring", 00:10:22.111 "raid_level": "raid1", 00:10:22.111 "superblock": false, 00:10:22.111 "num_base_bdevs": 3, 00:10:22.112 "num_base_bdevs_discovered": 2, 00:10:22.112 "num_base_bdevs_operational": 3, 00:10:22.112 "base_bdevs_list": [ 00:10:22.112 { 00:10:22.112 "name": null, 00:10:22.112 "uuid": "884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:22.112 "is_configured": false, 00:10:22.112 "data_offset": 0, 00:10:22.112 "data_size": 65536 00:10:22.112 }, 00:10:22.112 { 00:10:22.112 "name": "BaseBdev2", 00:10:22.112 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:22.112 "is_configured": true, 00:10:22.112 "data_offset": 0, 00:10:22.112 "data_size": 65536 00:10:22.112 }, 00:10:22.112 { 00:10:22.112 "name": "BaseBdev3", 
00:10:22.112 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:22.112 "is_configured": true, 00:10:22.112 "data_offset": 0, 00:10:22.112 "data_size": 65536 00:10:22.112 } 00:10:22.112 ] 00:10:22.112 }' 00:10:22.112 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.112 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:22.372 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 884ce788-53bf-4bf5-9e36-31a6fc003205 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.633 [2024-11-17 01:30:30.893155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:22.633 [2024-11-17 01:30:30.893230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:22.633 [2024-11-17 01:30:30.893238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:22.633 [2024-11-17 01:30:30.893477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:22.633 [2024-11-17 01:30:30.893636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:22.633 [2024-11-17 01:30:30.893663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:22.633 [2024-11-17 01:30:30.893941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.633 NewBaseBdev 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.633 
01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.633 [ 00:10:22.633 { 00:10:22.633 "name": "NewBaseBdev", 00:10:22.633 "aliases": [ 00:10:22.633 "884ce788-53bf-4bf5-9e36-31a6fc003205" 00:10:22.633 ], 00:10:22.633 "product_name": "Malloc disk", 00:10:22.633 "block_size": 512, 00:10:22.633 "num_blocks": 65536, 00:10:22.633 "uuid": "884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:22.633 "assigned_rate_limits": { 00:10:22.633 "rw_ios_per_sec": 0, 00:10:22.633 "rw_mbytes_per_sec": 0, 00:10:22.633 "r_mbytes_per_sec": 0, 00:10:22.633 "w_mbytes_per_sec": 0 00:10:22.633 }, 00:10:22.633 "claimed": true, 00:10:22.633 "claim_type": "exclusive_write", 00:10:22.633 "zoned": false, 00:10:22.633 "supported_io_types": { 00:10:22.633 "read": true, 00:10:22.633 "write": true, 00:10:22.633 "unmap": true, 00:10:22.633 "flush": true, 00:10:22.633 "reset": true, 00:10:22.633 "nvme_admin": false, 00:10:22.633 "nvme_io": false, 00:10:22.633 "nvme_io_md": false, 00:10:22.633 "write_zeroes": true, 00:10:22.633 "zcopy": true, 00:10:22.633 "get_zone_info": false, 00:10:22.633 "zone_management": false, 00:10:22.633 "zone_append": false, 00:10:22.633 "compare": false, 00:10:22.633 "compare_and_write": false, 00:10:22.633 "abort": true, 00:10:22.633 "seek_hole": false, 00:10:22.633 "seek_data": false, 00:10:22.633 "copy": true, 00:10:22.633 "nvme_iov_md": false 00:10:22.633 }, 00:10:22.633 "memory_domains": [ 00:10:22.633 { 00:10:22.633 "dma_device_id": "system", 00:10:22.633 "dma_device_type": 1 
00:10:22.633 }, 00:10:22.633 { 00:10:22.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.633 "dma_device_type": 2 00:10:22.633 } 00:10:22.633 ], 00:10:22.633 "driver_specific": {} 00:10:22.633 } 00:10:22.633 ] 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.633 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.633 "name": "Existed_Raid", 00:10:22.633 "uuid": "c1791dd0-c2e9-4c86-ad30-6b31d3a1d3aa", 00:10:22.633 "strip_size_kb": 0, 00:10:22.633 "state": "online", 00:10:22.633 "raid_level": "raid1", 00:10:22.633 "superblock": false, 00:10:22.633 "num_base_bdevs": 3, 00:10:22.633 "num_base_bdevs_discovered": 3, 00:10:22.633 "num_base_bdevs_operational": 3, 00:10:22.633 "base_bdevs_list": [ 00:10:22.633 { 00:10:22.633 "name": "NewBaseBdev", 00:10:22.633 "uuid": "884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:22.633 "is_configured": true, 00:10:22.633 "data_offset": 0, 00:10:22.633 "data_size": 65536 00:10:22.633 }, 00:10:22.633 { 00:10:22.633 "name": "BaseBdev2", 00:10:22.633 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:22.633 "is_configured": true, 00:10:22.633 "data_offset": 0, 00:10:22.634 "data_size": 65536 00:10:22.634 }, 00:10:22.634 { 00:10:22.634 "name": "BaseBdev3", 00:10:22.634 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:22.634 "is_configured": true, 00:10:22.634 "data_offset": 0, 00:10:22.634 "data_size": 65536 00:10:22.634 } 00:10:22.634 ] 00:10:22.634 }' 00:10:22.634 01:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.634 01:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.893 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.893 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.894 [2024-11-17 01:30:31.332755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.894 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.154 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.154 "name": "Existed_Raid", 00:10:23.154 "aliases": [ 00:10:23.154 "c1791dd0-c2e9-4c86-ad30-6b31d3a1d3aa" 00:10:23.154 ], 00:10:23.154 "product_name": "Raid Volume", 00:10:23.154 "block_size": 512, 00:10:23.154 "num_blocks": 65536, 00:10:23.154 "uuid": "c1791dd0-c2e9-4c86-ad30-6b31d3a1d3aa", 00:10:23.154 "assigned_rate_limits": { 00:10:23.154 "rw_ios_per_sec": 0, 00:10:23.154 "rw_mbytes_per_sec": 0, 00:10:23.154 "r_mbytes_per_sec": 0, 00:10:23.154 "w_mbytes_per_sec": 0 00:10:23.154 }, 00:10:23.154 "claimed": false, 00:10:23.154 "zoned": false, 00:10:23.154 "supported_io_types": { 00:10:23.154 "read": true, 00:10:23.154 "write": true, 00:10:23.154 "unmap": false, 00:10:23.154 "flush": false, 00:10:23.154 "reset": true, 00:10:23.154 "nvme_admin": false, 00:10:23.154 "nvme_io": false, 00:10:23.154 "nvme_io_md": false, 00:10:23.154 "write_zeroes": true, 00:10:23.154 "zcopy": false, 00:10:23.154 "get_zone_info": false, 00:10:23.154 "zone_management": false, 00:10:23.154 
"zone_append": false, 00:10:23.154 "compare": false, 00:10:23.154 "compare_and_write": false, 00:10:23.154 "abort": false, 00:10:23.154 "seek_hole": false, 00:10:23.154 "seek_data": false, 00:10:23.154 "copy": false, 00:10:23.154 "nvme_iov_md": false 00:10:23.154 }, 00:10:23.154 "memory_domains": [ 00:10:23.154 { 00:10:23.154 "dma_device_id": "system", 00:10:23.154 "dma_device_type": 1 00:10:23.154 }, 00:10:23.154 { 00:10:23.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.154 "dma_device_type": 2 00:10:23.154 }, 00:10:23.154 { 00:10:23.154 "dma_device_id": "system", 00:10:23.154 "dma_device_type": 1 00:10:23.154 }, 00:10:23.154 { 00:10:23.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.154 "dma_device_type": 2 00:10:23.154 }, 00:10:23.154 { 00:10:23.154 "dma_device_id": "system", 00:10:23.154 "dma_device_type": 1 00:10:23.154 }, 00:10:23.154 { 00:10:23.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.154 "dma_device_type": 2 00:10:23.154 } 00:10:23.154 ], 00:10:23.154 "driver_specific": { 00:10:23.154 "raid": { 00:10:23.154 "uuid": "c1791dd0-c2e9-4c86-ad30-6b31d3a1d3aa", 00:10:23.154 "strip_size_kb": 0, 00:10:23.154 "state": "online", 00:10:23.154 "raid_level": "raid1", 00:10:23.154 "superblock": false, 00:10:23.154 "num_base_bdevs": 3, 00:10:23.154 "num_base_bdevs_discovered": 3, 00:10:23.154 "num_base_bdevs_operational": 3, 00:10:23.154 "base_bdevs_list": [ 00:10:23.154 { 00:10:23.154 "name": "NewBaseBdev", 00:10:23.154 "uuid": "884ce788-53bf-4bf5-9e36-31a6fc003205", 00:10:23.154 "is_configured": true, 00:10:23.154 "data_offset": 0, 00:10:23.154 "data_size": 65536 00:10:23.154 }, 00:10:23.154 { 00:10:23.154 "name": "BaseBdev2", 00:10:23.154 "uuid": "25a76bd2-7d60-4680-ad4b-1496fd1dec35", 00:10:23.154 "is_configured": true, 00:10:23.154 "data_offset": 0, 00:10:23.154 "data_size": 65536 00:10:23.154 }, 00:10:23.155 { 00:10:23.155 "name": "BaseBdev3", 00:10:23.155 "uuid": "c0e33c9c-36b8-4475-86e3-333ab5b16278", 00:10:23.155 "is_configured": true, 
00:10:23.155 "data_offset": 0, 00:10:23.155 "data_size": 65536 00:10:23.155 } 00:10:23.155 ] 00:10:23.155 } 00:10:23.155 } 00:10:23.155 }' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:23.155 BaseBdev2 00:10:23.155 BaseBdev3' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.155 [2024-11-17 01:30:31.572058] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:10:23.155 [2024-11-17 01:30:31.572094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.155 [2024-11-17 01:30:31.572171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.155 [2024-11-17 01:30:31.572444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.155 [2024-11-17 01:30:31.572463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67209 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67209 ']' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67209 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.155 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67209 00:10:23.415 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.415 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.415 killing process with pid 67209 00:10:23.415 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67209' 00:10:23.415 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67209 00:10:23.415 [2024-11-17 01:30:31.619063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:10:23.415 01:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67209 00:10:23.675 [2024-11-17 01:30:31.914567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:24.718 00:10:24.718 real 0m10.484s 00:10:24.718 user 0m16.651s 00:10:24.718 sys 0m1.845s 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.718 ************************************ 00:10:24.718 END TEST raid_state_function_test 00:10:24.718 ************************************ 00:10:24.718 01:30:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:24.718 01:30:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.718 01:30:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.718 01:30:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.718 ************************************ 00:10:24.718 START TEST raid_state_function_test_sb 00:10:24.718 ************************************ 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67825 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67825' 00:10:24.718 Process raid pid: 67825 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67825 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67825 ']' 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.718 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.718 [2024-11-17 01:30:33.167419] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:24.718 [2024-11-17 01:30:33.167529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.978 [2024-11-17 01:30:33.341016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.238 [2024-11-17 01:30:33.455836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.238 [2024-11-17 01:30:33.660985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.238 [2024-11-17 01:30:33.661034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.807 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.807 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:25.807 01:30:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:25.807 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.807 01:30:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.807 [2024-11-17 01:30:34.003100] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.807 [2024-11-17 01:30:34.003150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.807 [2024-11-17 01:30:34.003160] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.807 [2024-11-17 01:30:34.003170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.807 [2024-11-17 01:30:34.003176] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:25.807 [2024-11-17 01:30:34.003185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.807 "name": "Existed_Raid", 00:10:25.807 "uuid": "01507144-7f88-4bf3-8d0c-2b605868450f", 00:10:25.807 "strip_size_kb": 0, 00:10:25.807 "state": "configuring", 00:10:25.807 "raid_level": "raid1", 00:10:25.807 "superblock": true, 00:10:25.807 "num_base_bdevs": 3, 00:10:25.807 "num_base_bdevs_discovered": 0, 00:10:25.807 "num_base_bdevs_operational": 3, 00:10:25.807 "base_bdevs_list": [ 00:10:25.807 { 00:10:25.807 "name": "BaseBdev1", 00:10:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.807 "is_configured": false, 00:10:25.807 "data_offset": 0, 00:10:25.807 "data_size": 0 00:10:25.807 }, 00:10:25.807 { 00:10:25.807 "name": "BaseBdev2", 00:10:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.807 "is_configured": false, 00:10:25.807 "data_offset": 0, 00:10:25.807 "data_size": 0 00:10:25.807 }, 00:10:25.807 { 00:10:25.807 "name": "BaseBdev3", 00:10:25.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.807 "is_configured": false, 00:10:25.807 "data_offset": 0, 00:10:25.807 "data_size": 0 00:10:25.807 } 00:10:25.807 ] 00:10:25.807 }' 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.807 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.066 [2024-11-17 01:30:34.434310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.066 [2024-11-17 01:30:34.434351] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.066 [2024-11-17 01:30:34.446286] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.066 [2024-11-17 01:30:34.446329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.066 [2024-11-17 01:30:34.446337] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.066 [2024-11-17 01:30:34.446346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.066 [2024-11-17 01:30:34.446352] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.066 [2024-11-17 01:30:34.446360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.066 [2024-11-17 01:30:34.494129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.066 BaseBdev1 
00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.066 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.066 [ 00:10:26.066 { 00:10:26.066 "name": "BaseBdev1", 00:10:26.066 "aliases": [ 00:10:26.066 "eab6c240-d93e-4f95-98c3-7a9496b67efc" 00:10:26.066 ], 00:10:26.066 "product_name": "Malloc disk", 00:10:26.066 "block_size": 512, 00:10:26.066 "num_blocks": 65536, 00:10:26.066 "uuid": "eab6c240-d93e-4f95-98c3-7a9496b67efc", 00:10:26.066 "assigned_rate_limits": { 00:10:26.066 
"rw_ios_per_sec": 0, 00:10:26.066 "rw_mbytes_per_sec": 0, 00:10:26.066 "r_mbytes_per_sec": 0, 00:10:26.066 "w_mbytes_per_sec": 0 00:10:26.066 }, 00:10:26.066 "claimed": true, 00:10:26.067 "claim_type": "exclusive_write", 00:10:26.067 "zoned": false, 00:10:26.067 "supported_io_types": { 00:10:26.067 "read": true, 00:10:26.067 "write": true, 00:10:26.067 "unmap": true, 00:10:26.067 "flush": true, 00:10:26.067 "reset": true, 00:10:26.067 "nvme_admin": false, 00:10:26.067 "nvme_io": false, 00:10:26.326 "nvme_io_md": false, 00:10:26.326 "write_zeroes": true, 00:10:26.326 "zcopy": true, 00:10:26.326 "get_zone_info": false, 00:10:26.326 "zone_management": false, 00:10:26.326 "zone_append": false, 00:10:26.326 "compare": false, 00:10:26.326 "compare_and_write": false, 00:10:26.326 "abort": true, 00:10:26.326 "seek_hole": false, 00:10:26.326 "seek_data": false, 00:10:26.326 "copy": true, 00:10:26.326 "nvme_iov_md": false 00:10:26.326 }, 00:10:26.326 "memory_domains": [ 00:10:26.326 { 00:10:26.326 "dma_device_id": "system", 00:10:26.326 "dma_device_type": 1 00:10:26.326 }, 00:10:26.326 { 00:10:26.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.326 "dma_device_type": 2 00:10:26.326 } 00:10:26.326 ], 00:10:26.326 "driver_specific": {} 00:10:26.326 } 00:10:26.326 ] 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.326 "name": "Existed_Raid", 00:10:26.326 "uuid": "5a92a4e4-8b7e-4053-916f-af1d2422016e", 00:10:26.326 "strip_size_kb": 0, 00:10:26.326 "state": "configuring", 00:10:26.326 "raid_level": "raid1", 00:10:26.326 "superblock": true, 00:10:26.326 "num_base_bdevs": 3, 00:10:26.326 "num_base_bdevs_discovered": 1, 00:10:26.326 "num_base_bdevs_operational": 3, 00:10:26.326 "base_bdevs_list": [ 00:10:26.326 { 00:10:26.326 "name": "BaseBdev1", 00:10:26.326 "uuid": "eab6c240-d93e-4f95-98c3-7a9496b67efc", 00:10:26.326 "is_configured": true, 00:10:26.326 "data_offset": 2048, 00:10:26.326 "data_size": 63488 
00:10:26.326 }, 00:10:26.326 { 00:10:26.326 "name": "BaseBdev2", 00:10:26.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.326 "is_configured": false, 00:10:26.326 "data_offset": 0, 00:10:26.326 "data_size": 0 00:10:26.326 }, 00:10:26.326 { 00:10:26.326 "name": "BaseBdev3", 00:10:26.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.326 "is_configured": false, 00:10:26.326 "data_offset": 0, 00:10:26.326 "data_size": 0 00:10:26.326 } 00:10:26.326 ] 00:10:26.326 }' 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.326 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.587 [2024-11-17 01:30:34.909499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.587 [2024-11-17 01:30:34.909556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.587 [2024-11-17 01:30:34.917542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.587 [2024-11-17 01:30:34.919618] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.587 [2024-11-17 01:30:34.919667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.587 [2024-11-17 01:30:34.919678] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.587 [2024-11-17 01:30:34.919688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.587 "name": "Existed_Raid", 00:10:26.587 "uuid": "71f0eceb-cef5-486d-8b43-5f4d0c1d2c79", 00:10:26.587 "strip_size_kb": 0, 00:10:26.587 "state": "configuring", 00:10:26.587 "raid_level": "raid1", 00:10:26.587 "superblock": true, 00:10:26.587 "num_base_bdevs": 3, 00:10:26.587 "num_base_bdevs_discovered": 1, 00:10:26.587 "num_base_bdevs_operational": 3, 00:10:26.587 "base_bdevs_list": [ 00:10:26.587 { 00:10:26.587 "name": "BaseBdev1", 00:10:26.587 "uuid": "eab6c240-d93e-4f95-98c3-7a9496b67efc", 00:10:26.587 "is_configured": true, 00:10:26.587 "data_offset": 2048, 00:10:26.587 "data_size": 63488 00:10:26.587 }, 00:10:26.587 { 00:10:26.587 "name": "BaseBdev2", 00:10:26.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.587 "is_configured": false, 00:10:26.587 "data_offset": 0, 00:10:26.587 "data_size": 0 00:10:26.587 }, 00:10:26.587 { 00:10:26.587 "name": "BaseBdev3", 00:10:26.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.587 "is_configured": false, 00:10:26.587 "data_offset": 0, 00:10:26.587 "data_size": 0 00:10:26.587 } 00:10:26.587 ] 00:10:26.587 }' 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.587 01:30:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 [2024-11-17 01:30:35.439398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.156 BaseBdev2 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.156 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.157 [ 00:10:27.157 { 00:10:27.157 "name": "BaseBdev2", 00:10:27.157 "aliases": [ 00:10:27.157 "c44f50c3-382c-4dbb-aea6-8c893ccda4d2" 00:10:27.157 ], 00:10:27.157 "product_name": "Malloc disk", 00:10:27.157 "block_size": 512, 00:10:27.157 "num_blocks": 65536, 00:10:27.157 "uuid": "c44f50c3-382c-4dbb-aea6-8c893ccda4d2", 00:10:27.157 "assigned_rate_limits": { 00:10:27.157 "rw_ios_per_sec": 0, 00:10:27.157 "rw_mbytes_per_sec": 0, 00:10:27.157 "r_mbytes_per_sec": 0, 00:10:27.157 "w_mbytes_per_sec": 0 00:10:27.157 }, 00:10:27.157 "claimed": true, 00:10:27.157 "claim_type": "exclusive_write", 00:10:27.157 "zoned": false, 00:10:27.157 "supported_io_types": { 00:10:27.157 "read": true, 00:10:27.157 "write": true, 00:10:27.157 "unmap": true, 00:10:27.157 "flush": true, 00:10:27.157 "reset": true, 00:10:27.157 "nvme_admin": false, 00:10:27.157 "nvme_io": false, 00:10:27.157 "nvme_io_md": false, 00:10:27.157 "write_zeroes": true, 00:10:27.157 "zcopy": true, 00:10:27.157 "get_zone_info": false, 00:10:27.157 "zone_management": false, 00:10:27.157 "zone_append": false, 00:10:27.157 "compare": false, 00:10:27.157 "compare_and_write": false, 00:10:27.157 "abort": true, 00:10:27.157 "seek_hole": false, 00:10:27.157 "seek_data": false, 00:10:27.157 "copy": true, 00:10:27.157 "nvme_iov_md": false 00:10:27.157 }, 00:10:27.157 "memory_domains": [ 00:10:27.157 { 00:10:27.157 "dma_device_id": "system", 00:10:27.157 "dma_device_type": 1 00:10:27.157 }, 00:10:27.157 { 00:10:27.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.157 "dma_device_type": 2 00:10:27.157 } 00:10:27.157 ], 00:10:27.157 "driver_specific": {} 00:10:27.157 } 00:10:27.157 ] 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.157 
01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.157 "name": "Existed_Raid", 00:10:27.157 "uuid": "71f0eceb-cef5-486d-8b43-5f4d0c1d2c79", 00:10:27.157 "strip_size_kb": 0, 00:10:27.157 "state": "configuring", 00:10:27.157 "raid_level": "raid1", 00:10:27.157 "superblock": true, 00:10:27.157 "num_base_bdevs": 3, 00:10:27.157 "num_base_bdevs_discovered": 2, 00:10:27.157 "num_base_bdevs_operational": 3, 00:10:27.157 "base_bdevs_list": [ 00:10:27.157 { 00:10:27.157 "name": "BaseBdev1", 00:10:27.157 "uuid": "eab6c240-d93e-4f95-98c3-7a9496b67efc", 00:10:27.157 "is_configured": true, 00:10:27.157 "data_offset": 2048, 00:10:27.157 "data_size": 63488 00:10:27.157 }, 00:10:27.157 { 00:10:27.157 "name": "BaseBdev2", 00:10:27.157 "uuid": "c44f50c3-382c-4dbb-aea6-8c893ccda4d2", 00:10:27.157 "is_configured": true, 00:10:27.157 "data_offset": 2048, 00:10:27.157 "data_size": 63488 00:10:27.157 }, 00:10:27.157 { 00:10:27.157 "name": "BaseBdev3", 00:10:27.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.157 "is_configured": false, 00:10:27.157 "data_offset": 0, 00:10:27.157 "data_size": 0 00:10:27.157 } 00:10:27.157 ] 00:10:27.157 }' 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.157 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.417 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.417 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.417 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.677 [2024-11-17 01:30:35.907933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.677 [2024-11-17 01:30:35.908202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:27.677 [2024-11-17 01:30:35.908225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.677 [2024-11-17 01:30:35.908498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:27.677 [2024-11-17 01:30:35.908654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.677 [2024-11-17 01:30:35.908669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:27.677 [2024-11-17 01:30:35.908829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.677 BaseBdev3 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.677 01:30:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.677 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.677 [ 00:10:27.677 { 00:10:27.677 "name": "BaseBdev3", 00:10:27.677 "aliases": [ 00:10:27.677 "4c2b0292-7d54-430b-828e-d1abb160a319" 00:10:27.677 ], 00:10:27.677 "product_name": "Malloc disk", 00:10:27.677 "block_size": 512, 00:10:27.677 "num_blocks": 65536, 00:10:27.677 "uuid": "4c2b0292-7d54-430b-828e-d1abb160a319", 00:10:27.677 "assigned_rate_limits": { 00:10:27.677 "rw_ios_per_sec": 0, 00:10:27.677 "rw_mbytes_per_sec": 0, 00:10:27.677 "r_mbytes_per_sec": 0, 00:10:27.677 "w_mbytes_per_sec": 0 00:10:27.677 }, 00:10:27.678 "claimed": true, 00:10:27.678 "claim_type": "exclusive_write", 00:10:27.678 "zoned": false, 00:10:27.678 "supported_io_types": { 00:10:27.678 "read": true, 00:10:27.678 "write": true, 00:10:27.678 "unmap": true, 00:10:27.678 "flush": true, 00:10:27.678 "reset": true, 00:10:27.678 "nvme_admin": false, 00:10:27.678 "nvme_io": false, 00:10:27.678 "nvme_io_md": false, 00:10:27.678 "write_zeroes": true, 00:10:27.678 "zcopy": true, 00:10:27.678 "get_zone_info": false, 00:10:27.678 "zone_management": false, 00:10:27.678 "zone_append": false, 00:10:27.678 "compare": false, 00:10:27.678 "compare_and_write": false, 00:10:27.678 "abort": true, 00:10:27.678 "seek_hole": false, 00:10:27.678 "seek_data": false, 00:10:27.678 "copy": true, 00:10:27.678 "nvme_iov_md": false 00:10:27.678 }, 00:10:27.678 "memory_domains": [ 00:10:27.678 { 00:10:27.678 "dma_device_id": "system", 00:10:27.678 "dma_device_type": 1 00:10:27.678 }, 00:10:27.678 { 00:10:27.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.678 "dma_device_type": 2 00:10:27.678 } 00:10:27.678 ], 00:10:27.678 "driver_specific": {} 00:10:27.678 } 00:10:27.678 ] 
00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.678 
01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.678 "name": "Existed_Raid", 00:10:27.678 "uuid": "71f0eceb-cef5-486d-8b43-5f4d0c1d2c79", 00:10:27.678 "strip_size_kb": 0, 00:10:27.678 "state": "online", 00:10:27.678 "raid_level": "raid1", 00:10:27.678 "superblock": true, 00:10:27.678 "num_base_bdevs": 3, 00:10:27.678 "num_base_bdevs_discovered": 3, 00:10:27.678 "num_base_bdevs_operational": 3, 00:10:27.678 "base_bdevs_list": [ 00:10:27.678 { 00:10:27.678 "name": "BaseBdev1", 00:10:27.678 "uuid": "eab6c240-d93e-4f95-98c3-7a9496b67efc", 00:10:27.678 "is_configured": true, 00:10:27.678 "data_offset": 2048, 00:10:27.678 "data_size": 63488 00:10:27.678 }, 00:10:27.678 { 00:10:27.678 "name": "BaseBdev2", 00:10:27.678 "uuid": "c44f50c3-382c-4dbb-aea6-8c893ccda4d2", 00:10:27.678 "is_configured": true, 00:10:27.678 "data_offset": 2048, 00:10:27.678 "data_size": 63488 00:10:27.678 }, 00:10:27.678 { 00:10:27.678 "name": "BaseBdev3", 00:10:27.678 "uuid": "4c2b0292-7d54-430b-828e-d1abb160a319", 00:10:27.678 "is_configured": true, 00:10:27.678 "data_offset": 2048, 00:10:27.678 "data_size": 63488 00:10:27.678 } 00:10:27.678 ] 00:10:27.678 }' 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.678 01:30:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.937 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.937 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.937 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:27.937 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.937 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.937 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.197 [2024-11-17 01:30:36.399523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.197 "name": "Existed_Raid", 00:10:28.197 "aliases": [ 00:10:28.197 "71f0eceb-cef5-486d-8b43-5f4d0c1d2c79" 00:10:28.197 ], 00:10:28.197 "product_name": "Raid Volume", 00:10:28.197 "block_size": 512, 00:10:28.197 "num_blocks": 63488, 00:10:28.197 "uuid": "71f0eceb-cef5-486d-8b43-5f4d0c1d2c79", 00:10:28.197 "assigned_rate_limits": { 00:10:28.197 "rw_ios_per_sec": 0, 00:10:28.197 "rw_mbytes_per_sec": 0, 00:10:28.197 "r_mbytes_per_sec": 0, 00:10:28.197 "w_mbytes_per_sec": 0 00:10:28.197 }, 00:10:28.197 "claimed": false, 00:10:28.197 "zoned": false, 00:10:28.197 "supported_io_types": { 00:10:28.197 "read": true, 00:10:28.197 "write": true, 00:10:28.197 "unmap": false, 00:10:28.197 "flush": false, 00:10:28.197 "reset": true, 00:10:28.197 "nvme_admin": false, 00:10:28.197 "nvme_io": false, 00:10:28.197 "nvme_io_md": false, 00:10:28.197 "write_zeroes": true, 
00:10:28.197 "zcopy": false, 00:10:28.197 "get_zone_info": false, 00:10:28.197 "zone_management": false, 00:10:28.197 "zone_append": false, 00:10:28.197 "compare": false, 00:10:28.197 "compare_and_write": false, 00:10:28.197 "abort": false, 00:10:28.197 "seek_hole": false, 00:10:28.197 "seek_data": false, 00:10:28.197 "copy": false, 00:10:28.197 "nvme_iov_md": false 00:10:28.197 }, 00:10:28.197 "memory_domains": [ 00:10:28.197 { 00:10:28.197 "dma_device_id": "system", 00:10:28.197 "dma_device_type": 1 00:10:28.197 }, 00:10:28.197 { 00:10:28.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.197 "dma_device_type": 2 00:10:28.197 }, 00:10:28.197 { 00:10:28.197 "dma_device_id": "system", 00:10:28.197 "dma_device_type": 1 00:10:28.197 }, 00:10:28.197 { 00:10:28.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.197 "dma_device_type": 2 00:10:28.197 }, 00:10:28.197 { 00:10:28.197 "dma_device_id": "system", 00:10:28.197 "dma_device_type": 1 00:10:28.197 }, 00:10:28.197 { 00:10:28.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.197 "dma_device_type": 2 00:10:28.197 } 00:10:28.197 ], 00:10:28.197 "driver_specific": { 00:10:28.197 "raid": { 00:10:28.197 "uuid": "71f0eceb-cef5-486d-8b43-5f4d0c1d2c79", 00:10:28.197 "strip_size_kb": 0, 00:10:28.197 "state": "online", 00:10:28.197 "raid_level": "raid1", 00:10:28.197 "superblock": true, 00:10:28.197 "num_base_bdevs": 3, 00:10:28.197 "num_base_bdevs_discovered": 3, 00:10:28.197 "num_base_bdevs_operational": 3, 00:10:28.197 "base_bdevs_list": [ 00:10:28.197 { 00:10:28.197 "name": "BaseBdev1", 00:10:28.197 "uuid": "eab6c240-d93e-4f95-98c3-7a9496b67efc", 00:10:28.197 "is_configured": true, 00:10:28.197 "data_offset": 2048, 00:10:28.197 "data_size": 63488 00:10:28.197 }, 00:10:28.197 { 00:10:28.197 "name": "BaseBdev2", 00:10:28.197 "uuid": "c44f50c3-382c-4dbb-aea6-8c893ccda4d2", 00:10:28.197 "is_configured": true, 00:10:28.197 "data_offset": 2048, 00:10:28.197 "data_size": 63488 00:10:28.197 }, 00:10:28.197 { 
00:10:28.197 "name": "BaseBdev3", 00:10:28.197 "uuid": "4c2b0292-7d54-430b-828e-d1abb160a319", 00:10:28.197 "is_configured": true, 00:10:28.197 "data_offset": 2048, 00:10:28.197 "data_size": 63488 00:10:28.197 } 00:10:28.197 ] 00:10:28.197 } 00:10:28.197 } 00:10:28.197 }' 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:28.197 BaseBdev2 00:10:28.197 BaseBdev3' 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.197 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.198 01:30:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.198 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.458 [2024-11-17 01:30:36.694827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.458 
01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.458 "name": "Existed_Raid", 00:10:28.458 "uuid": "71f0eceb-cef5-486d-8b43-5f4d0c1d2c79", 00:10:28.458 "strip_size_kb": 0, 00:10:28.458 "state": "online", 00:10:28.458 "raid_level": "raid1", 00:10:28.458 "superblock": true, 00:10:28.458 "num_base_bdevs": 3, 00:10:28.458 "num_base_bdevs_discovered": 2, 00:10:28.458 "num_base_bdevs_operational": 2, 00:10:28.458 "base_bdevs_list": [ 00:10:28.458 { 00:10:28.458 "name": null, 00:10:28.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.458 "is_configured": false, 00:10:28.458 "data_offset": 0, 00:10:28.458 "data_size": 63488 00:10:28.458 }, 00:10:28.458 { 00:10:28.458 "name": "BaseBdev2", 00:10:28.458 "uuid": "c44f50c3-382c-4dbb-aea6-8c893ccda4d2", 00:10:28.458 "is_configured": true, 00:10:28.458 "data_offset": 2048, 00:10:28.458 "data_size": 63488 00:10:28.458 }, 00:10:28.458 { 00:10:28.458 "name": "BaseBdev3", 00:10:28.458 "uuid": "4c2b0292-7d54-430b-828e-d1abb160a319", 00:10:28.458 "is_configured": true, 00:10:28.458 "data_offset": 2048, 00:10:28.458 "data_size": 63488 00:10:28.458 } 00:10:28.458 ] 00:10:28.458 }' 00:10:28.458 01:30:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.458 
01:30:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.718 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.718 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.718 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.718 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.977 [2024-11-17 01:30:37.221133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.977 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
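[Editor's note] The `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` comparison logged earlier in this section (bdev_raid.sh@189/@192) builds a metadata signature per bdev and requires each base bdev's signature to match the raid bdev's (`512` plus three empty fields here, hence the `\5\1\2\ \ \ ` glob match). A minimal Python sketch of that jq filter — the sample record below is illustrative, mirroring the Malloc disks in this log:

```python
import json

# Illustrative bdev_get_bdevs output for one base bdev: 512-byte blocks,
# no metadata fields set (they come back as null / absent).
bdev_json = '[{"name": "BaseBdev1", "block_size": 512}]'

def md_signature(bdev):
    # jq's join(" ") renders null as the empty string; mimic that here.
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev.get(f))
                    for f in fields)

sig = md_signature(json.loads(bdev_json)[0])
print(repr(sig))  # "512" followed by three empty joined fields: '512   '
```

The test only passes when every base bdev yields the identical signature string, trailing spaces included — which is why the log's `[[ 512 == \5\1\2\ \ \ ]]` matches literal spaces.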
00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.978 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.978 [2024-11-17 01:30:37.361138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.978 [2024-11-17 01:30:37.361243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.237 [2024-11-17 01:30:37.453658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.237 [2024-11-17 01:30:37.453714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.237 [2024-11-17 01:30:37.453741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.237 BaseBdev2 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.237 01:30:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.237 [ 00:10:29.237 { 00:10:29.237 "name": "BaseBdev2", 00:10:29.237 "aliases": [ 00:10:29.237 "f8118f9b-48cb-4520-95a9-82c93cd58a98" 00:10:29.237 ], 00:10:29.237 "product_name": "Malloc disk", 00:10:29.237 "block_size": 512, 00:10:29.237 "num_blocks": 65536, 00:10:29.237 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:29.237 "assigned_rate_limits": { 00:10:29.237 "rw_ios_per_sec": 0, 00:10:29.237 "rw_mbytes_per_sec": 0, 00:10:29.237 "r_mbytes_per_sec": 0, 00:10:29.237 "w_mbytes_per_sec": 0 00:10:29.237 }, 00:10:29.237 "claimed": false, 00:10:29.237 "zoned": false, 00:10:29.237 "supported_io_types": { 00:10:29.237 "read": true, 00:10:29.237 "write": true, 00:10:29.237 "unmap": true, 00:10:29.237 "flush": true, 00:10:29.237 "reset": true, 00:10:29.237 "nvme_admin": false, 00:10:29.237 "nvme_io": false, 00:10:29.237 "nvme_io_md": false, 00:10:29.237 
"write_zeroes": true, 00:10:29.237 "zcopy": true, 00:10:29.237 "get_zone_info": false, 00:10:29.237 "zone_management": false, 00:10:29.237 "zone_append": false, 00:10:29.237 "compare": false, 00:10:29.237 "compare_and_write": false, 00:10:29.237 "abort": true, 00:10:29.237 "seek_hole": false, 00:10:29.237 "seek_data": false, 00:10:29.237 "copy": true, 00:10:29.237 "nvme_iov_md": false 00:10:29.237 }, 00:10:29.237 "memory_domains": [ 00:10:29.237 { 00:10:29.237 "dma_device_id": "system", 00:10:29.237 "dma_device_type": 1 00:10:29.237 }, 00:10:29.237 { 00:10:29.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.237 "dma_device_type": 2 00:10:29.237 } 00:10:29.237 ], 00:10:29.237 "driver_specific": {} 00:10:29.237 } 00:10:29.237 ] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.237 BaseBdev3 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.237 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.237 [ 00:10:29.237 { 00:10:29.237 "name": "BaseBdev3", 00:10:29.237 "aliases": [ 00:10:29.237 "25d806c3-cbd9-49a3-aecb-275bf1a72998" 00:10:29.237 ], 00:10:29.237 "product_name": "Malloc disk", 00:10:29.237 "block_size": 512, 00:10:29.237 "num_blocks": 65536, 00:10:29.237 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:29.237 "assigned_rate_limits": { 00:10:29.237 "rw_ios_per_sec": 0, 00:10:29.237 "rw_mbytes_per_sec": 0, 00:10:29.237 "r_mbytes_per_sec": 0, 00:10:29.238 "w_mbytes_per_sec": 0 00:10:29.238 }, 00:10:29.238 "claimed": false, 00:10:29.238 "zoned": false, 00:10:29.238 "supported_io_types": { 00:10:29.238 "read": true, 00:10:29.238 "write": true, 00:10:29.238 "unmap": true, 00:10:29.238 "flush": true, 00:10:29.238 "reset": true, 00:10:29.238 "nvme_admin": false, 00:10:29.238 "nvme_io": false, 
00:10:29.238 "nvme_io_md": false, 00:10:29.238 "write_zeroes": true, 00:10:29.238 "zcopy": true, 00:10:29.238 "get_zone_info": false, 00:10:29.238 "zone_management": false, 00:10:29.238 "zone_append": false, 00:10:29.238 "compare": false, 00:10:29.238 "compare_and_write": false, 00:10:29.238 "abort": true, 00:10:29.238 "seek_hole": false, 00:10:29.238 "seek_data": false, 00:10:29.238 "copy": true, 00:10:29.238 "nvme_iov_md": false 00:10:29.238 }, 00:10:29.238 "memory_domains": [ 00:10:29.238 { 00:10:29.238 "dma_device_id": "system", 00:10:29.238 "dma_device_type": 1 00:10:29.238 }, 00:10:29.238 { 00:10:29.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.238 "dma_device_type": 2 00:10:29.238 } 00:10:29.238 ], 00:10:29.238 "driver_specific": {} 00:10:29.238 } 00:10:29.238 ] 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.238 [2024-11-17 01:30:37.670315] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.238 [2024-11-17 01:30:37.670367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.238 [2024-11-17 01:30:37.670387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:10:29.238 [2024-11-17 01:30:37.672369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.238 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.496 01:30:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.496 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.496 "name": "Existed_Raid", 00:10:29.496 "uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:29.496 "strip_size_kb": 0, 00:10:29.496 "state": "configuring", 00:10:29.496 "raid_level": "raid1", 00:10:29.496 "superblock": true, 00:10:29.496 "num_base_bdevs": 3, 00:10:29.497 "num_base_bdevs_discovered": 2, 00:10:29.497 "num_base_bdevs_operational": 3, 00:10:29.497 "base_bdevs_list": [ 00:10:29.497 { 00:10:29.497 "name": "BaseBdev1", 00:10:29.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.497 "is_configured": false, 00:10:29.497 "data_offset": 0, 00:10:29.497 "data_size": 0 00:10:29.497 }, 00:10:29.497 { 00:10:29.497 "name": "BaseBdev2", 00:10:29.497 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:29.497 "is_configured": true, 00:10:29.497 "data_offset": 2048, 00:10:29.497 "data_size": 63488 00:10:29.497 }, 00:10:29.497 { 00:10:29.497 "name": "BaseBdev3", 00:10:29.497 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:29.497 "is_configured": true, 00:10:29.497 "data_offset": 2048, 00:10:29.497 "data_size": 63488 00:10:29.497 } 00:10:29.497 ] 00:10:29.497 }' 00:10:29.497 01:30:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.497 01:30:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.756 [2024-11-17 01:30:38.149530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.756 "name": "Existed_Raid", 00:10:29.756 "uuid": 
"b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:29.756 "strip_size_kb": 0, 00:10:29.756 "state": "configuring", 00:10:29.756 "raid_level": "raid1", 00:10:29.756 "superblock": true, 00:10:29.756 "num_base_bdevs": 3, 00:10:29.756 "num_base_bdevs_discovered": 1, 00:10:29.756 "num_base_bdevs_operational": 3, 00:10:29.756 "base_bdevs_list": [ 00:10:29.756 { 00:10:29.756 "name": "BaseBdev1", 00:10:29.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.756 "is_configured": false, 00:10:29.756 "data_offset": 0, 00:10:29.756 "data_size": 0 00:10:29.756 }, 00:10:29.756 { 00:10:29.756 "name": null, 00:10:29.756 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:29.756 "is_configured": false, 00:10:29.756 "data_offset": 0, 00:10:29.756 "data_size": 63488 00:10:29.756 }, 00:10:29.756 { 00:10:29.756 "name": "BaseBdev3", 00:10:29.756 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:29.756 "is_configured": true, 00:10:29.756 "data_offset": 2048, 00:10:29.756 "data_size": 63488 00:10:29.756 } 00:10:29.756 ] 00:10:29.756 }' 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.756 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.325 01:30:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.325 [2024-11-17 01:30:38.640358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.325 BaseBdev1 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.325 [ 00:10:30.325 { 00:10:30.325 "name": "BaseBdev1", 00:10:30.325 "aliases": [ 00:10:30.325 "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e" 00:10:30.325 ], 00:10:30.325 "product_name": "Malloc disk", 00:10:30.325 "block_size": 512, 00:10:30.325 "num_blocks": 65536, 00:10:30.325 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:30.325 "assigned_rate_limits": { 00:10:30.325 "rw_ios_per_sec": 0, 00:10:30.325 "rw_mbytes_per_sec": 0, 00:10:30.325 "r_mbytes_per_sec": 0, 00:10:30.325 "w_mbytes_per_sec": 0 00:10:30.325 }, 00:10:30.325 "claimed": true, 00:10:30.325 "claim_type": "exclusive_write", 00:10:30.325 "zoned": false, 00:10:30.325 "supported_io_types": { 00:10:30.325 "read": true, 00:10:30.325 "write": true, 00:10:30.325 "unmap": true, 00:10:30.325 "flush": true, 00:10:30.325 "reset": true, 00:10:30.325 "nvme_admin": false, 00:10:30.325 "nvme_io": false, 00:10:30.325 "nvme_io_md": false, 00:10:30.325 "write_zeroes": true, 00:10:30.325 "zcopy": true, 00:10:30.325 "get_zone_info": false, 00:10:30.325 "zone_management": false, 00:10:30.325 "zone_append": false, 00:10:30.325 "compare": false, 00:10:30.325 "compare_and_write": false, 00:10:30.325 "abort": true, 00:10:30.325 "seek_hole": false, 00:10:30.325 "seek_data": false, 00:10:30.325 "copy": true, 00:10:30.325 "nvme_iov_md": false 00:10:30.325 }, 00:10:30.325 "memory_domains": [ 00:10:30.325 { 00:10:30.325 "dma_device_id": "system", 00:10:30.325 "dma_device_type": 1 00:10:30.325 }, 00:10:30.325 { 00:10:30.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.325 "dma_device_type": 2 00:10:30.325 } 00:10:30.325 ], 00:10:30.325 "driver_specific": {} 00:10:30.325 } 00:10:30.325 ] 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.325 
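[Editor's note] The `verify_raid_bdev_state` helper invoked throughout this log (bdev_raid.sh@103-115) filters `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and then checks the state and bdev-count fields. Roughly, in Python — the payload below is abridged from the "configuring" record printed in this log:

```python
import json

# Abridged bdev_raid_get_bdevs output, mirroring the "configuring" state
# seen in this log while BaseBdev1 is being recreated.
raid_bdevs_json = '''[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}]'''

def verify_raid_bdev_state(payload, name, expected_state, level, operational):
    # Equivalent of jq's .[] | select(.name == "<name>") plus the field
    # comparisons the shell helper performs on the selected record.
    info = next(b for b in json.loads(payload) if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == level
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(raid_bdevs_json, "Existed_Raid",
                             "configuring", "raid1", 3))  # True
```

This is why the superblock variant of the test expects `configuring` rather than `offline` after base bdevs are deleted: with `-s`, the raid bdev survives and waits for its missing members.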
01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.325 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.326 "name": "Existed_Raid", 00:10:30.326 "uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:30.326 "strip_size_kb": 0, 
00:10:30.326 "state": "configuring", 00:10:30.326 "raid_level": "raid1", 00:10:30.326 "superblock": true, 00:10:30.326 "num_base_bdevs": 3, 00:10:30.326 "num_base_bdevs_discovered": 2, 00:10:30.326 "num_base_bdevs_operational": 3, 00:10:30.326 "base_bdevs_list": [ 00:10:30.326 { 00:10:30.326 "name": "BaseBdev1", 00:10:30.326 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:30.326 "is_configured": true, 00:10:30.326 "data_offset": 2048, 00:10:30.326 "data_size": 63488 00:10:30.326 }, 00:10:30.326 { 00:10:30.326 "name": null, 00:10:30.326 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:30.326 "is_configured": false, 00:10:30.326 "data_offset": 0, 00:10:30.326 "data_size": 63488 00:10:30.326 }, 00:10:30.326 { 00:10:30.326 "name": "BaseBdev3", 00:10:30.326 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:30.326 "is_configured": true, 00:10:30.326 "data_offset": 2048, 00:10:30.326 "data_size": 63488 00:10:30.326 } 00:10:30.326 ] 00:10:30.326 }' 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.326 01:30:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.893 [2024-11-17 01:30:39.123611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.893 01:30:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.893 "name": "Existed_Raid", 00:10:30.893 "uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:30.893 "strip_size_kb": 0, 00:10:30.893 "state": "configuring", 00:10:30.893 "raid_level": "raid1", 00:10:30.893 "superblock": true, 00:10:30.893 "num_base_bdevs": 3, 00:10:30.893 "num_base_bdevs_discovered": 1, 00:10:30.893 "num_base_bdevs_operational": 3, 00:10:30.893 "base_bdevs_list": [ 00:10:30.893 { 00:10:30.893 "name": "BaseBdev1", 00:10:30.893 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:30.893 "is_configured": true, 00:10:30.893 "data_offset": 2048, 00:10:30.893 "data_size": 63488 00:10:30.893 }, 00:10:30.893 { 00:10:30.893 "name": null, 00:10:30.893 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:30.893 "is_configured": false, 00:10:30.893 "data_offset": 0, 00:10:30.893 "data_size": 63488 00:10:30.893 }, 00:10:30.893 { 00:10:30.893 "name": null, 00:10:30.893 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:30.893 "is_configured": false, 00:10:30.893 "data_offset": 0, 00:10:30.893 "data_size": 63488 00:10:30.893 } 00:10:30.893 ] 00:10:30.893 }' 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.893 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.153 [2024-11-17 01:30:39.562893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.153 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.413 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.413 "name": "Existed_Raid", 00:10:31.413 "uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:31.413 "strip_size_kb": 0, 00:10:31.413 "state": "configuring", 00:10:31.413 "raid_level": "raid1", 00:10:31.413 "superblock": true, 00:10:31.413 "num_base_bdevs": 3, 00:10:31.413 "num_base_bdevs_discovered": 2, 00:10:31.413 "num_base_bdevs_operational": 3, 00:10:31.413 "base_bdevs_list": [ 00:10:31.413 { 00:10:31.413 "name": "BaseBdev1", 00:10:31.413 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:31.413 "is_configured": true, 00:10:31.413 "data_offset": 2048, 00:10:31.413 "data_size": 63488 00:10:31.413 }, 00:10:31.413 { 00:10:31.413 "name": null, 00:10:31.413 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:31.413 "is_configured": false, 00:10:31.413 "data_offset": 0, 00:10:31.413 "data_size": 63488 00:10:31.413 }, 00:10:31.413 { 00:10:31.413 "name": "BaseBdev3", 00:10:31.413 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:31.413 "is_configured": true, 00:10:31.413 "data_offset": 2048, 00:10:31.413 "data_size": 63488 00:10:31.413 } 00:10:31.413 ] 00:10:31.413 }' 00:10:31.413 01:30:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.413 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.673 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.673 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.673 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.673 01:30:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.673 01:30:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.673 [2024-11-17 01:30:40.026080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.673 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.933 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.933 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.933 "name": "Existed_Raid", 00:10:31.933 "uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:31.933 "strip_size_kb": 0, 00:10:31.933 "state": "configuring", 00:10:31.933 "raid_level": "raid1", 00:10:31.933 "superblock": true, 00:10:31.933 "num_base_bdevs": 3, 00:10:31.933 "num_base_bdevs_discovered": 1, 00:10:31.933 "num_base_bdevs_operational": 3, 00:10:31.933 "base_bdevs_list": [ 00:10:31.933 { 00:10:31.933 "name": null, 00:10:31.933 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:31.933 "is_configured": false, 00:10:31.933 "data_offset": 0, 00:10:31.933 "data_size": 63488 00:10:31.933 }, 00:10:31.933 { 00:10:31.933 "name": null, 00:10:31.933 "uuid": 
"f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:31.933 "is_configured": false, 00:10:31.933 "data_offset": 0, 00:10:31.933 "data_size": 63488 00:10:31.933 }, 00:10:31.933 { 00:10:31.933 "name": "BaseBdev3", 00:10:31.933 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:31.933 "is_configured": true, 00:10:31.933 "data_offset": 2048, 00:10:31.933 "data_size": 63488 00:10:31.933 } 00:10:31.933 ] 00:10:31.933 }' 00:10:31.933 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.933 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.193 [2024-11-17 01:30:40.601531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.193 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.193 "name": "Existed_Raid", 00:10:32.193 "uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:32.193 "strip_size_kb": 0, 00:10:32.193 "state": "configuring", 00:10:32.193 
"raid_level": "raid1", 00:10:32.193 "superblock": true, 00:10:32.193 "num_base_bdevs": 3, 00:10:32.193 "num_base_bdevs_discovered": 2, 00:10:32.193 "num_base_bdevs_operational": 3, 00:10:32.193 "base_bdevs_list": [ 00:10:32.193 { 00:10:32.193 "name": null, 00:10:32.193 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:32.193 "is_configured": false, 00:10:32.193 "data_offset": 0, 00:10:32.193 "data_size": 63488 00:10:32.194 }, 00:10:32.194 { 00:10:32.194 "name": "BaseBdev2", 00:10:32.194 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:32.194 "is_configured": true, 00:10:32.194 "data_offset": 2048, 00:10:32.194 "data_size": 63488 00:10:32.194 }, 00:10:32.194 { 00:10:32.194 "name": "BaseBdev3", 00:10:32.194 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:32.194 "is_configured": true, 00:10:32.194 "data_offset": 2048, 00:10:32.194 "data_size": 63488 00:10:32.194 } 00:10:32.194 ] 00:10:32.194 }' 00:10:32.194 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.194 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.764 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.764 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.764 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.764 01:30:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.764 01:30:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.764 01:30:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 152224a9-9a1c-42cc-a5da-36c3d8b8eb3e 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.764 [2024-11-17 01:30:41.119625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:32.764 [2024-11-17 01:30:41.119907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.764 [2024-11-17 01:30:41.119925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:32.764 [2024-11-17 01:30:41.120162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:32.764 [2024-11-17 01:30:41.120315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.764 [2024-11-17 01:30:41.120327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:32.764 NewBaseBdev 00:10:32.764 [2024-11-17 01:30:41.120455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:32.764 
01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.764 [ 00:10:32.764 { 00:10:32.764 "name": "NewBaseBdev", 00:10:32.764 "aliases": [ 00:10:32.764 "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e" 00:10:32.764 ], 00:10:32.764 "product_name": "Malloc disk", 00:10:32.764 "block_size": 512, 00:10:32.764 "num_blocks": 65536, 00:10:32.764 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:32.764 "assigned_rate_limits": { 00:10:32.764 "rw_ios_per_sec": 0, 00:10:32.764 "rw_mbytes_per_sec": 0, 00:10:32.764 "r_mbytes_per_sec": 0, 00:10:32.764 "w_mbytes_per_sec": 0 00:10:32.764 }, 00:10:32.764 "claimed": true, 00:10:32.764 "claim_type": "exclusive_write", 00:10:32.764 
"zoned": false, 00:10:32.764 "supported_io_types": { 00:10:32.764 "read": true, 00:10:32.764 "write": true, 00:10:32.764 "unmap": true, 00:10:32.764 "flush": true, 00:10:32.764 "reset": true, 00:10:32.764 "nvme_admin": false, 00:10:32.764 "nvme_io": false, 00:10:32.764 "nvme_io_md": false, 00:10:32.764 "write_zeroes": true, 00:10:32.764 "zcopy": true, 00:10:32.764 "get_zone_info": false, 00:10:32.764 "zone_management": false, 00:10:32.764 "zone_append": false, 00:10:32.764 "compare": false, 00:10:32.764 "compare_and_write": false, 00:10:32.764 "abort": true, 00:10:32.764 "seek_hole": false, 00:10:32.764 "seek_data": false, 00:10:32.764 "copy": true, 00:10:32.764 "nvme_iov_md": false 00:10:32.764 }, 00:10:32.764 "memory_domains": [ 00:10:32.764 { 00:10:32.764 "dma_device_id": "system", 00:10:32.764 "dma_device_type": 1 00:10:32.764 }, 00:10:32.764 { 00:10:32.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.764 "dma_device_type": 2 00:10:32.764 } 00:10:32.764 ], 00:10:32.764 "driver_specific": {} 00:10:32.764 } 00:10:32.764 ] 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:32.764 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.765 "name": "Existed_Raid", 00:10:32.765 "uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:32.765 "strip_size_kb": 0, 00:10:32.765 "state": "online", 00:10:32.765 "raid_level": "raid1", 00:10:32.765 "superblock": true, 00:10:32.765 "num_base_bdevs": 3, 00:10:32.765 "num_base_bdevs_discovered": 3, 00:10:32.765 "num_base_bdevs_operational": 3, 00:10:32.765 "base_bdevs_list": [ 00:10:32.765 { 00:10:32.765 "name": "NewBaseBdev", 00:10:32.765 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:32.765 "is_configured": true, 00:10:32.765 "data_offset": 2048, 00:10:32.765 "data_size": 63488 00:10:32.765 }, 00:10:32.765 { 00:10:32.765 "name": "BaseBdev2", 00:10:32.765 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:32.765 "is_configured": true, 00:10:32.765 "data_offset": 2048, 00:10:32.765 "data_size": 63488 00:10:32.765 }, 00:10:32.765 
{ 00:10:32.765 "name": "BaseBdev3", 00:10:32.765 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:32.765 "is_configured": true, 00:10:32.765 "data_offset": 2048, 00:10:32.765 "data_size": 63488 00:10:32.765 } 00:10:32.765 ] 00:10:32.765 }' 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.765 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.335 [2024-11-17 01:30:41.559157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.335 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.335 "name": "Existed_Raid", 00:10:33.335 
"aliases": [ 00:10:33.335 "b31d7fb4-fb70-4695-bc12-922a8a568471" 00:10:33.335 ], 00:10:33.335 "product_name": "Raid Volume", 00:10:33.335 "block_size": 512, 00:10:33.335 "num_blocks": 63488, 00:10:33.335 "uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:33.335 "assigned_rate_limits": { 00:10:33.335 "rw_ios_per_sec": 0, 00:10:33.335 "rw_mbytes_per_sec": 0, 00:10:33.335 "r_mbytes_per_sec": 0, 00:10:33.335 "w_mbytes_per_sec": 0 00:10:33.335 }, 00:10:33.335 "claimed": false, 00:10:33.335 "zoned": false, 00:10:33.335 "supported_io_types": { 00:10:33.335 "read": true, 00:10:33.335 "write": true, 00:10:33.335 "unmap": false, 00:10:33.335 "flush": false, 00:10:33.335 "reset": true, 00:10:33.335 "nvme_admin": false, 00:10:33.335 "nvme_io": false, 00:10:33.335 "nvme_io_md": false, 00:10:33.335 "write_zeroes": true, 00:10:33.335 "zcopy": false, 00:10:33.335 "get_zone_info": false, 00:10:33.335 "zone_management": false, 00:10:33.335 "zone_append": false, 00:10:33.335 "compare": false, 00:10:33.335 "compare_and_write": false, 00:10:33.335 "abort": false, 00:10:33.335 "seek_hole": false, 00:10:33.335 "seek_data": false, 00:10:33.335 "copy": false, 00:10:33.335 "nvme_iov_md": false 00:10:33.335 }, 00:10:33.335 "memory_domains": [ 00:10:33.336 { 00:10:33.336 "dma_device_id": "system", 00:10:33.336 "dma_device_type": 1 00:10:33.336 }, 00:10:33.336 { 00:10:33.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.336 "dma_device_type": 2 00:10:33.336 }, 00:10:33.336 { 00:10:33.336 "dma_device_id": "system", 00:10:33.336 "dma_device_type": 1 00:10:33.336 }, 00:10:33.336 { 00:10:33.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.336 "dma_device_type": 2 00:10:33.336 }, 00:10:33.336 { 00:10:33.336 "dma_device_id": "system", 00:10:33.336 "dma_device_type": 1 00:10:33.336 }, 00:10:33.336 { 00:10:33.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.336 "dma_device_type": 2 00:10:33.336 } 00:10:33.336 ], 00:10:33.336 "driver_specific": { 00:10:33.336 "raid": { 00:10:33.336 
"uuid": "b31d7fb4-fb70-4695-bc12-922a8a568471", 00:10:33.336 "strip_size_kb": 0, 00:10:33.336 "state": "online", 00:10:33.336 "raid_level": "raid1", 00:10:33.336 "superblock": true, 00:10:33.336 "num_base_bdevs": 3, 00:10:33.336 "num_base_bdevs_discovered": 3, 00:10:33.336 "num_base_bdevs_operational": 3, 00:10:33.336 "base_bdevs_list": [ 00:10:33.336 { 00:10:33.336 "name": "NewBaseBdev", 00:10:33.336 "uuid": "152224a9-9a1c-42cc-a5da-36c3d8b8eb3e", 00:10:33.336 "is_configured": true, 00:10:33.336 "data_offset": 2048, 00:10:33.336 "data_size": 63488 00:10:33.336 }, 00:10:33.336 { 00:10:33.336 "name": "BaseBdev2", 00:10:33.336 "uuid": "f8118f9b-48cb-4520-95a9-82c93cd58a98", 00:10:33.336 "is_configured": true, 00:10:33.336 "data_offset": 2048, 00:10:33.336 "data_size": 63488 00:10:33.336 }, 00:10:33.336 { 00:10:33.336 "name": "BaseBdev3", 00:10:33.336 "uuid": "25d806c3-cbd9-49a3-aecb-275bf1a72998", 00:10:33.336 "is_configured": true, 00:10:33.336 "data_offset": 2048, 00:10:33.336 "data_size": 63488 00:10:33.336 } 00:10:33.336 ] 00:10:33.336 } 00:10:33.336 } 00:10:33.336 }' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:33.336 BaseBdev2 00:10:33.336 BaseBdev3' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.336 
01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.336 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.596 [2024-11-17 01:30:41.818432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.596 [2024-11-17 01:30:41.818459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.596 [2024-11-17 01:30:41.818521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.596 [2024-11-17 01:30:41.818803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.596 [2024-11-17 01:30:41.818814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67825 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67825 ']' 00:10:33.596 01:30:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67825 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67825 00:10:33.596 killing process with pid 67825 00:10:33.596 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.597 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.597 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67825' 00:10:33.597 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67825 00:10:33.597 [2024-11-17 01:30:41.865909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.597 01:30:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67825 00:10:33.856 [2024-11-17 01:30:42.167640] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.837 ************************************ 00:10:34.837 END TEST raid_state_function_test_sb 00:10:34.837 ************************************ 00:10:34.837 01:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:34.837 00:10:34.837 real 0m10.137s 00:10:34.837 user 0m16.043s 00:10:34.837 sys 0m1.844s 00:10:34.837 01:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.837 01:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.837 01:30:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:34.837 01:30:43 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.837 01:30:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.837 01:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.837 ************************************ 00:10:34.837 START TEST raid_superblock_test 00:10:34.837 ************************************ 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:34.837 01:30:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68445 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68445 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68445 ']' 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.837 01:30:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.098 [2024-11-17 01:30:43.370131] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:35.098 [2024-11-17 01:30:43.370324] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68445 ] 00:10:35.098 [2024-11-17 01:30:43.544616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.357 [2024-11-17 01:30:43.652382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.617 [2024-11-17 01:30:43.858450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.617 [2024-11-17 01:30:43.858507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:35.876 
01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.876 malloc1 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.876 [2024-11-17 01:30:44.294948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:35.876 [2024-11-17 01:30:44.295095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.876 [2024-11-17 01:30:44.295164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:35.876 [2024-11-17 01:30:44.295206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.876 [2024-11-17 01:30:44.297693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.876 [2024-11-17 01:30:44.297794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:35.876 pt1 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.876 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.136 malloc2 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.136 [2024-11-17 01:30:44.358240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:36.136 [2024-11-17 01:30:44.358366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.136 [2024-11-17 01:30:44.358414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:36.136 [2024-11-17 01:30:44.358451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.136 [2024-11-17 01:30:44.360861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.136 [2024-11-17 01:30:44.360943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:36.136 
pt2 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.136 malloc3 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.136 [2024-11-17 01:30:44.428842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:36.136 [2024-11-17 01:30:44.428963] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.136 [2024-11-17 01:30:44.429010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:36.136 [2024-11-17 01:30:44.429047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.136 [2024-11-17 01:30:44.431466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.136 [2024-11-17 01:30:44.431553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:36.136 pt3 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.136 [2024-11-17 01:30:44.440881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:36.136 [2024-11-17 01:30:44.443039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:36.136 [2024-11-17 01:30:44.443171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:36.136 [2024-11-17 01:30:44.443381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:36.136 [2024-11-17 01:30:44.443444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.136 [2024-11-17 01:30:44.443752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:36.136 
[2024-11-17 01:30:44.444006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:36.136 [2024-11-17 01:30:44.444060] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:36.136 [2024-11-17 01:30:44.444278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.136 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.136 "name": "raid_bdev1", 00:10:36.136 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:36.136 "strip_size_kb": 0, 00:10:36.136 "state": "online", 00:10:36.136 "raid_level": "raid1", 00:10:36.136 "superblock": true, 00:10:36.136 "num_base_bdevs": 3, 00:10:36.137 "num_base_bdevs_discovered": 3, 00:10:36.137 "num_base_bdevs_operational": 3, 00:10:36.137 "base_bdevs_list": [ 00:10:36.137 { 00:10:36.137 "name": "pt1", 00:10:36.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.137 "is_configured": true, 00:10:36.137 "data_offset": 2048, 00:10:36.137 "data_size": 63488 00:10:36.137 }, 00:10:36.137 { 00:10:36.137 "name": "pt2", 00:10:36.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.137 "is_configured": true, 00:10:36.137 "data_offset": 2048, 00:10:36.137 "data_size": 63488 00:10:36.137 }, 00:10:36.137 { 00:10:36.137 "name": "pt3", 00:10:36.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.137 "is_configured": true, 00:10:36.137 "data_offset": 2048, 00:10:36.137 "data_size": 63488 00:10:36.137 } 00:10:36.137 ] 00:10:36.137 }' 00:10:36.137 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.137 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.706 01:30:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.706 [2024-11-17 01:30:44.920394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.706 "name": "raid_bdev1", 00:10:36.706 "aliases": [ 00:10:36.706 "092b84c0-e219-41e4-8fe8-295a51b0c7a3" 00:10:36.706 ], 00:10:36.706 "product_name": "Raid Volume", 00:10:36.706 "block_size": 512, 00:10:36.706 "num_blocks": 63488, 00:10:36.706 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:36.706 "assigned_rate_limits": { 00:10:36.706 "rw_ios_per_sec": 0, 00:10:36.706 "rw_mbytes_per_sec": 0, 00:10:36.706 "r_mbytes_per_sec": 0, 00:10:36.706 "w_mbytes_per_sec": 0 00:10:36.706 }, 00:10:36.706 "claimed": false, 00:10:36.706 "zoned": false, 00:10:36.706 "supported_io_types": { 00:10:36.706 "read": true, 00:10:36.706 "write": true, 00:10:36.706 "unmap": false, 00:10:36.706 "flush": false, 00:10:36.706 "reset": true, 00:10:36.706 "nvme_admin": false, 00:10:36.706 "nvme_io": false, 00:10:36.706 "nvme_io_md": false, 00:10:36.706 "write_zeroes": true, 00:10:36.706 "zcopy": false, 00:10:36.706 "get_zone_info": false, 00:10:36.706 "zone_management": false, 00:10:36.706 "zone_append": false, 00:10:36.706 "compare": false, 00:10:36.706 
"compare_and_write": false, 00:10:36.706 "abort": false, 00:10:36.706 "seek_hole": false, 00:10:36.706 "seek_data": false, 00:10:36.706 "copy": false, 00:10:36.706 "nvme_iov_md": false 00:10:36.706 }, 00:10:36.706 "memory_domains": [ 00:10:36.706 { 00:10:36.706 "dma_device_id": "system", 00:10:36.706 "dma_device_type": 1 00:10:36.706 }, 00:10:36.706 { 00:10:36.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.706 "dma_device_type": 2 00:10:36.706 }, 00:10:36.706 { 00:10:36.706 "dma_device_id": "system", 00:10:36.706 "dma_device_type": 1 00:10:36.706 }, 00:10:36.706 { 00:10:36.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.706 "dma_device_type": 2 00:10:36.706 }, 00:10:36.706 { 00:10:36.706 "dma_device_id": "system", 00:10:36.706 "dma_device_type": 1 00:10:36.706 }, 00:10:36.706 { 00:10:36.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.706 "dma_device_type": 2 00:10:36.706 } 00:10:36.706 ], 00:10:36.706 "driver_specific": { 00:10:36.706 "raid": { 00:10:36.706 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:36.706 "strip_size_kb": 0, 00:10:36.706 "state": "online", 00:10:36.706 "raid_level": "raid1", 00:10:36.706 "superblock": true, 00:10:36.706 "num_base_bdevs": 3, 00:10:36.706 "num_base_bdevs_discovered": 3, 00:10:36.706 "num_base_bdevs_operational": 3, 00:10:36.706 "base_bdevs_list": [ 00:10:36.706 { 00:10:36.706 "name": "pt1", 00:10:36.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.706 "is_configured": true, 00:10:36.706 "data_offset": 2048, 00:10:36.706 "data_size": 63488 00:10:36.706 }, 00:10:36.706 { 00:10:36.706 "name": "pt2", 00:10:36.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.706 "is_configured": true, 00:10:36.706 "data_offset": 2048, 00:10:36.706 "data_size": 63488 00:10:36.706 }, 00:10:36.706 { 00:10:36.706 "name": "pt3", 00:10:36.706 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:36.706 "is_configured": true, 00:10:36.706 "data_offset": 2048, 00:10:36.706 "data_size": 63488 00:10:36.706 } 
00:10:36.706 ] 00:10:36.706 } 00:10:36.706 } 00:10:36.706 }' 00:10:36.706 01:30:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:36.706 pt2 00:10:36.706 pt3' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.706 01:30:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.706 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:36.966 [2024-11-17 01:30:45.203982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=092b84c0-e219-41e4-8fe8-295a51b0c7a3 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 092b84c0-e219-41e4-8fe8-295a51b0c7a3 ']' 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.966 [2024-11-17 01:30:45.251552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.966 [2024-11-17 01:30:45.251584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.966 [2024-11-17 01:30:45.251668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.966 [2024-11-17 01:30:45.251748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.966 [2024-11-17 01:30:45.251781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:36.966 
01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.966 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.967 [2024-11-17 01:30:45.411334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:36.967 [2024-11-17 01:30:45.413478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:36.967 [2024-11-17 01:30:45.413600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:36.967 [2024-11-17 01:30:45.413679] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:36.967 [2024-11-17 01:30:45.413800] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:36.967 [2024-11-17 01:30:45.413874] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:36.967 [2024-11-17 01:30:45.413944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.967 [2024-11-17 01:30:45.413984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:36.967 request: 00:10:36.967 { 00:10:36.967 "name": "raid_bdev1", 00:10:36.967 "raid_level": "raid1", 00:10:36.967 "base_bdevs": [ 00:10:36.967 "malloc1", 00:10:36.967 "malloc2", 00:10:36.967 "malloc3" 00:10:36.967 ], 00:10:36.967 "superblock": false, 00:10:36.967 "method": "bdev_raid_create", 00:10:36.967 "req_id": 1 00:10:36.967 } 00:10:36.967 Got JSON-RPC error response 00:10:36.967 response: 00:10:36.967 { 00:10:36.967 "code": -17, 00:10:36.967 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:36.967 } 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:36.967 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.231 01:30:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.231 [2024-11-17 01:30:45.475164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.231 [2024-11-17 01:30:45.475266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.231 [2024-11-17 01:30:45.475309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:37.231 [2024-11-17 01:30:45.475366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.231 [2024-11-17 01:30:45.477659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.231 [2024-11-17 01:30:45.477734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.231 [2024-11-17 01:30:45.477866] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:37.231 [2024-11-17 01:30:45.477945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:37.231 pt1 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.231 "name": "raid_bdev1", 00:10:37.231 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:37.231 "strip_size_kb": 0, 00:10:37.231 "state": "configuring", 00:10:37.231 
"raid_level": "raid1", 00:10:37.231 "superblock": true, 00:10:37.231 "num_base_bdevs": 3, 00:10:37.231 "num_base_bdevs_discovered": 1, 00:10:37.231 "num_base_bdevs_operational": 3, 00:10:37.231 "base_bdevs_list": [ 00:10:37.231 { 00:10:37.231 "name": "pt1", 00:10:37.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.231 "is_configured": true, 00:10:37.231 "data_offset": 2048, 00:10:37.231 "data_size": 63488 00:10:37.231 }, 00:10:37.231 { 00:10:37.231 "name": null, 00:10:37.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.231 "is_configured": false, 00:10:37.231 "data_offset": 2048, 00:10:37.231 "data_size": 63488 00:10:37.231 }, 00:10:37.231 { 00:10:37.231 "name": null, 00:10:37.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.231 "is_configured": false, 00:10:37.231 "data_offset": 2048, 00:10:37.231 "data_size": 63488 00:10:37.231 } 00:10:37.231 ] 00:10:37.231 }' 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.231 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.492 [2024-11-17 01:30:45.926533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.492 [2024-11-17 01:30:45.926680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.492 [2024-11-17 01:30:45.926709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:37.492 [2024-11-17 01:30:45.926721] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.492 [2024-11-17 01:30:45.927259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.492 [2024-11-17 01:30:45.927281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.492 [2024-11-17 01:30:45.927379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:37.492 [2024-11-17 01:30:45.927403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.492 pt2 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.492 [2024-11-17 01:30:45.938503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.492 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.752 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.752 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.752 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.752 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.752 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.752 "name": "raid_bdev1", 00:10:37.752 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:37.752 "strip_size_kb": 0, 00:10:37.752 "state": "configuring", 00:10:37.752 "raid_level": "raid1", 00:10:37.752 "superblock": true, 00:10:37.752 "num_base_bdevs": 3, 00:10:37.752 "num_base_bdevs_discovered": 1, 00:10:37.752 "num_base_bdevs_operational": 3, 00:10:37.752 "base_bdevs_list": [ 00:10:37.752 { 00:10:37.752 "name": "pt1", 00:10:37.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.752 "is_configured": true, 00:10:37.752 "data_offset": 2048, 00:10:37.752 "data_size": 63488 00:10:37.752 }, 00:10:37.752 { 00:10:37.752 "name": null, 00:10:37.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.752 "is_configured": false, 00:10:37.752 "data_offset": 0, 00:10:37.752 "data_size": 63488 00:10:37.752 }, 00:10:37.752 { 00:10:37.752 "name": null, 00:10:37.752 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.752 "is_configured": false, 00:10:37.752 "data_offset": 2048, 00:10:37.752 
"data_size": 63488 00:10:37.752 } 00:10:37.752 ] 00:10:37.752 }' 00:10:37.752 01:30:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.752 01:30:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.012 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.013 [2024-11-17 01:30:46.397784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.013 [2024-11-17 01:30:46.397924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.013 [2024-11-17 01:30:46.397949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:38.013 [2024-11-17 01:30:46.397962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.013 [2024-11-17 01:30:46.398476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.013 [2024-11-17 01:30:46.398500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.013 [2024-11-17 01:30:46.398593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:38.013 [2024-11-17 01:30:46.398640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.013 pt2 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.013 [2024-11-17 01:30:46.405734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:38.013 [2024-11-17 01:30:46.405849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.013 [2024-11-17 01:30:46.405876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:38.013 [2024-11-17 01:30:46.405892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.013 [2024-11-17 01:30:46.406311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.013 [2024-11-17 01:30:46.406336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:38.013 [2024-11-17 01:30:46.406405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:38.013 [2024-11-17 01:30:46.406429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:38.013 [2024-11-17 01:30:46.406571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.013 [2024-11-17 01:30:46.406586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.013 [2024-11-17 01:30:46.406862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:38.013 [2024-11-17 01:30:46.407064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:38.013 [2024-11-17 01:30:46.407086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:38.013 [2024-11-17 01:30:46.407258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.013 pt3 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.013 "name": "raid_bdev1", 00:10:38.013 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:38.013 "strip_size_kb": 0, 00:10:38.013 "state": "online", 00:10:38.013 "raid_level": "raid1", 00:10:38.013 "superblock": true, 00:10:38.013 "num_base_bdevs": 3, 00:10:38.013 "num_base_bdevs_discovered": 3, 00:10:38.013 "num_base_bdevs_operational": 3, 00:10:38.013 "base_bdevs_list": [ 00:10:38.013 { 00:10:38.013 "name": "pt1", 00:10:38.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.013 "is_configured": true, 00:10:38.013 "data_offset": 2048, 00:10:38.013 "data_size": 63488 00:10:38.013 }, 00:10:38.013 { 00:10:38.013 "name": "pt2", 00:10:38.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.013 "is_configured": true, 00:10:38.013 "data_offset": 2048, 00:10:38.013 "data_size": 63488 00:10:38.013 }, 00:10:38.013 { 00:10:38.013 "name": "pt3", 00:10:38.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.013 "is_configured": true, 00:10:38.013 "data_offset": 2048, 00:10:38.013 "data_size": 63488 00:10:38.013 } 00:10:38.013 ] 00:10:38.013 }' 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.013 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.582 01:30:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.582 [2024-11-17 01:30:46.853310] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.582 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.582 "name": "raid_bdev1", 00:10:38.582 "aliases": [ 00:10:38.582 "092b84c0-e219-41e4-8fe8-295a51b0c7a3" 00:10:38.582 ], 00:10:38.582 "product_name": "Raid Volume", 00:10:38.582 "block_size": 512, 00:10:38.582 "num_blocks": 63488, 00:10:38.582 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:38.582 "assigned_rate_limits": { 00:10:38.582 "rw_ios_per_sec": 0, 00:10:38.582 "rw_mbytes_per_sec": 0, 00:10:38.582 "r_mbytes_per_sec": 0, 00:10:38.582 "w_mbytes_per_sec": 0 00:10:38.582 }, 00:10:38.582 "claimed": false, 00:10:38.582 "zoned": false, 00:10:38.582 "supported_io_types": { 00:10:38.582 "read": true, 00:10:38.582 "write": true, 00:10:38.582 "unmap": false, 00:10:38.582 "flush": false, 00:10:38.582 "reset": true, 00:10:38.582 "nvme_admin": false, 00:10:38.582 "nvme_io": false, 00:10:38.582 "nvme_io_md": false, 00:10:38.582 "write_zeroes": true, 00:10:38.582 "zcopy": false, 00:10:38.582 "get_zone_info": false, 00:10:38.582 
"zone_management": false, 00:10:38.582 "zone_append": false, 00:10:38.582 "compare": false, 00:10:38.582 "compare_and_write": false, 00:10:38.582 "abort": false, 00:10:38.582 "seek_hole": false, 00:10:38.582 "seek_data": false, 00:10:38.582 "copy": false, 00:10:38.582 "nvme_iov_md": false 00:10:38.582 }, 00:10:38.582 "memory_domains": [ 00:10:38.582 { 00:10:38.582 "dma_device_id": "system", 00:10:38.582 "dma_device_type": 1 00:10:38.582 }, 00:10:38.582 { 00:10:38.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.582 "dma_device_type": 2 00:10:38.582 }, 00:10:38.582 { 00:10:38.582 "dma_device_id": "system", 00:10:38.582 "dma_device_type": 1 00:10:38.582 }, 00:10:38.582 { 00:10:38.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.582 "dma_device_type": 2 00:10:38.582 }, 00:10:38.582 { 00:10:38.582 "dma_device_id": "system", 00:10:38.582 "dma_device_type": 1 00:10:38.582 }, 00:10:38.582 { 00:10:38.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.582 "dma_device_type": 2 00:10:38.582 } 00:10:38.582 ], 00:10:38.582 "driver_specific": { 00:10:38.582 "raid": { 00:10:38.582 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:38.582 "strip_size_kb": 0, 00:10:38.582 "state": "online", 00:10:38.582 "raid_level": "raid1", 00:10:38.582 "superblock": true, 00:10:38.582 "num_base_bdevs": 3, 00:10:38.582 "num_base_bdevs_discovered": 3, 00:10:38.582 "num_base_bdevs_operational": 3, 00:10:38.582 "base_bdevs_list": [ 00:10:38.582 { 00:10:38.582 "name": "pt1", 00:10:38.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.582 "is_configured": true, 00:10:38.583 "data_offset": 2048, 00:10:38.583 "data_size": 63488 00:10:38.583 }, 00:10:38.583 { 00:10:38.583 "name": "pt2", 00:10:38.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.583 "is_configured": true, 00:10:38.583 "data_offset": 2048, 00:10:38.583 "data_size": 63488 00:10:38.583 }, 00:10:38.583 { 00:10:38.583 "name": "pt3", 00:10:38.583 "uuid": "00000000-0000-0000-0000-000000000003", 
00:10:38.583 "is_configured": true, 00:10:38.583 "data_offset": 2048, 00:10:38.583 "data_size": 63488 00:10:38.583 } 00:10:38.583 ] 00:10:38.583 } 00:10:38.583 } 00:10:38.583 }' 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:38.583 pt2 00:10:38.583 pt3' 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.583 01:30:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.583 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.583 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.583 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.583 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.583 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:38.583 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.583 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:38.843 [2024-11-17 01:30:47.136794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 092b84c0-e219-41e4-8fe8-295a51b0c7a3 '!=' 092b84c0-e219-41e4-8fe8-295a51b0c7a3 ']' 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 [2024-11-17 01:30:47.184489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.843 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.843 "name": "raid_bdev1", 00:10:38.843 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:38.843 "strip_size_kb": 0, 00:10:38.843 "state": "online", 00:10:38.843 "raid_level": "raid1", 00:10:38.843 "superblock": true, 00:10:38.843 "num_base_bdevs": 3, 00:10:38.843 "num_base_bdevs_discovered": 2, 00:10:38.843 "num_base_bdevs_operational": 2, 00:10:38.843 "base_bdevs_list": [ 00:10:38.843 { 00:10:38.843 "name": null, 00:10:38.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.843 "is_configured": false, 00:10:38.843 "data_offset": 0, 00:10:38.843 "data_size": 63488 00:10:38.843 }, 00:10:38.843 { 00:10:38.843 "name": "pt2", 00:10:38.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.843 "is_configured": true, 00:10:38.843 "data_offset": 2048, 00:10:38.843 "data_size": 63488 00:10:38.843 }, 00:10:38.843 { 00:10:38.843 "name": "pt3", 00:10:38.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.843 "is_configured": true, 00:10:38.843 "data_offset": 2048, 00:10:38.843 "data_size": 63488 00:10:38.843 } 00:10:38.844 ] 00:10:38.844 }' 00:10:38.844 01:30:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.844 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.414 [2024-11-17 01:30:47.683610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:39.414 [2024-11-17 01:30:47.683710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.414 [2024-11-17 01:30:47.683831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.414 [2024-11-17 01:30:47.683914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.414 [2024-11-17 01:30:47.683981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:39.414 
01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.414 [2024-11-17 01:30:47.767427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.414 [2024-11-17 01:30:47.767487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.414 [2024-11-17 01:30:47.767507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:39.414 [2024-11-17 01:30:47.767519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.414 [2024-11-17 01:30:47.770005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.414 [2024-11-17 01:30:47.770098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.414 [2024-11-17 01:30:47.770195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.414 [2024-11-17 01:30:47.770256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.414 pt2 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.414 "name": "raid_bdev1", 00:10:39.414 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:39.414 "strip_size_kb": 0, 00:10:39.414 "state": "configuring", 00:10:39.414 "raid_level": "raid1", 00:10:39.414 "superblock": true, 00:10:39.414 "num_base_bdevs": 3, 00:10:39.414 "num_base_bdevs_discovered": 1, 00:10:39.414 "num_base_bdevs_operational": 2, 00:10:39.414 "base_bdevs_list": [ 00:10:39.414 { 00:10:39.414 "name": null, 00:10:39.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.414 "is_configured": false, 00:10:39.414 "data_offset": 2048, 00:10:39.414 "data_size": 63488 00:10:39.414 }, 00:10:39.414 { 00:10:39.414 "name": "pt2", 00:10:39.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.414 "is_configured": true, 00:10:39.414 "data_offset": 2048, 00:10:39.414 "data_size": 63488 00:10:39.414 }, 00:10:39.414 { 00:10:39.414 "name": null, 00:10:39.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.414 "is_configured": false, 00:10:39.414 "data_offset": 2048, 00:10:39.414 "data_size": 63488 00:10:39.414 } 00:10:39.414 ] 00:10:39.414 }' 
00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.414 01:30:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.983 [2024-11-17 01:30:48.230776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:39.983 [2024-11-17 01:30:48.230916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.983 [2024-11-17 01:30:48.230971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:39.983 [2024-11-17 01:30:48.231017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.983 [2024-11-17 01:30:48.231566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.983 [2024-11-17 01:30:48.231638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:39.983 [2024-11-17 01:30:48.231793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:39.983 [2024-11-17 01:30:48.231863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:39.983 [2024-11-17 01:30:48.232052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:39.983 [2024-11-17 01:30:48.232100] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:39.983 [2024-11-17 01:30:48.232413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:39.983 [2024-11-17 01:30:48.232618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:39.983 [2024-11-17 01:30:48.232663] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:39.983 [2024-11-17 01:30:48.232917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.983 pt3 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.983 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.984 "name": "raid_bdev1", 00:10:39.984 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:39.984 "strip_size_kb": 0, 00:10:39.984 "state": "online", 00:10:39.984 "raid_level": "raid1", 00:10:39.984 "superblock": true, 00:10:39.984 "num_base_bdevs": 3, 00:10:39.984 "num_base_bdevs_discovered": 2, 00:10:39.984 "num_base_bdevs_operational": 2, 00:10:39.984 "base_bdevs_list": [ 00:10:39.984 { 00:10:39.984 "name": null, 00:10:39.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.984 "is_configured": false, 00:10:39.984 "data_offset": 2048, 00:10:39.984 "data_size": 63488 00:10:39.984 }, 00:10:39.984 { 00:10:39.984 "name": "pt2", 00:10:39.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.984 "is_configured": true, 00:10:39.984 "data_offset": 2048, 00:10:39.984 "data_size": 63488 00:10:39.984 }, 00:10:39.984 { 00:10:39.984 "name": "pt3", 00:10:39.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.984 "is_configured": true, 00:10:39.984 "data_offset": 2048, 00:10:39.984 "data_size": 63488 00:10:39.984 } 00:10:39.984 ] 00:10:39.984 }' 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.984 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.243 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.243 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.243 
01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.243 [2024-11-17 01:30:48.669980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.243 [2024-11-17 01:30:48.670015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.243 [2024-11-17 01:30:48.670095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.243 [2024-11-17 01:30:48.670156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.243 [2024-11-17 01:30:48.670165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:40.243 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.243 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.243 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.243 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:40.243 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.243 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.502 01:30:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.502 [2024-11-17 01:30:48.741897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:40.502 [2024-11-17 01:30:48.741956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.502 [2024-11-17 01:30:48.741978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:40.502 [2024-11-17 01:30:48.741986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.502 [2024-11-17 01:30:48.744395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.502 [2024-11-17 01:30:48.744435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:40.502 [2024-11-17 01:30:48.744518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:40.502 [2024-11-17 01:30:48.744569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:40.502 [2024-11-17 01:30:48.744691] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:40.502 [2024-11-17 01:30:48.744701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.502 [2024-11-17 01:30:48.744718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:40.502 [2024-11-17 
01:30:48.744790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:40.502 pt1 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.502 01:30:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.502 "name": "raid_bdev1", 00:10:40.502 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:40.502 "strip_size_kb": 0, 00:10:40.502 "state": "configuring", 00:10:40.502 "raid_level": "raid1", 00:10:40.502 "superblock": true, 00:10:40.502 "num_base_bdevs": 3, 00:10:40.502 "num_base_bdevs_discovered": 1, 00:10:40.502 "num_base_bdevs_operational": 2, 00:10:40.502 "base_bdevs_list": [ 00:10:40.502 { 00:10:40.502 "name": null, 00:10:40.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.502 "is_configured": false, 00:10:40.502 "data_offset": 2048, 00:10:40.502 "data_size": 63488 00:10:40.502 }, 00:10:40.502 { 00:10:40.502 "name": "pt2", 00:10:40.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.502 "is_configured": true, 00:10:40.502 "data_offset": 2048, 00:10:40.502 "data_size": 63488 00:10:40.502 }, 00:10:40.502 { 00:10:40.502 "name": null, 00:10:40.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.502 "is_configured": false, 00:10:40.502 "data_offset": 2048, 00:10:40.502 "data_size": 63488 00:10:40.502 } 00:10:40.502 ] 00:10:40.502 }' 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.502 01:30:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:40.771 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:40.771 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.771 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.771 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.048 [2024-11-17 01:30:49.257034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:41.048 [2024-11-17 01:30:49.257170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.048 [2024-11-17 01:30:49.257243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:41.048 [2024-11-17 01:30:49.257284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.048 [2024-11-17 01:30:49.257852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.048 [2024-11-17 01:30:49.257920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:41.048 [2024-11-17 01:30:49.258049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:41.048 [2024-11-17 01:30:49.258135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:41.048 [2024-11-17 01:30:49.258330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:41.048 [2024-11-17 01:30:49.258374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:41.048 [2024-11-17 01:30:49.258682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:41.048 [2024-11-17 01:30:49.258912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:41.048 [2024-11-17 01:30:49.258967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:10:41.048 [2024-11-17 01:30:49.259190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.048 pt3 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.048 "name": "raid_bdev1", 00:10:41.048 "uuid": "092b84c0-e219-41e4-8fe8-295a51b0c7a3", 00:10:41.048 "strip_size_kb": 0, 00:10:41.048 "state": "online", 00:10:41.048 "raid_level": "raid1", 00:10:41.048 "superblock": true, 00:10:41.048 "num_base_bdevs": 3, 00:10:41.048 "num_base_bdevs_discovered": 2, 00:10:41.048 "num_base_bdevs_operational": 2, 00:10:41.048 "base_bdevs_list": [ 00:10:41.048 { 00:10:41.048 "name": null, 00:10:41.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.048 "is_configured": false, 00:10:41.048 "data_offset": 2048, 00:10:41.048 "data_size": 63488 00:10:41.048 }, 00:10:41.048 { 00:10:41.048 "name": "pt2", 00:10:41.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.048 "is_configured": true, 00:10:41.048 "data_offset": 2048, 00:10:41.048 "data_size": 63488 00:10:41.048 }, 00:10:41.048 { 00:10:41.048 "name": "pt3", 00:10:41.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.048 "is_configured": true, 00:10:41.048 "data_offset": 2048, 00:10:41.048 "data_size": 63488 00:10:41.048 } 00:10:41.048 ] 00:10:41.048 }' 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.048 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:41.308 
01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.308 [2024-11-17 01:30:49.740691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 092b84c0-e219-41e4-8fe8-295a51b0c7a3 '!=' 092b84c0-e219-41e4-8fe8-295a51b0c7a3 ']' 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68445 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68445 ']' 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68445 00:10:41.308 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:41.568 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.568 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68445 00:10:41.568 killing process with pid 68445 00:10:41.568 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.568 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.568 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68445' 00:10:41.568 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68445 00:10:41.568 [2024-11-17 
01:30:49.801504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.568 [2024-11-17 01:30:49.801600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.568 [2024-11-17 01:30:49.801659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.568 [2024-11-17 01:30:49.801671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:41.568 01:30:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68445 00:10:41.828 [2024-11-17 01:30:50.157364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.209 01:30:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:43.209 ************************************ 00:10:43.209 END TEST raid_superblock_test 00:10:43.209 ************************************ 00:10:43.209 00:10:43.209 real 0m8.125s 00:10:43.209 user 0m12.624s 00:10:43.209 sys 0m1.434s 00:10:43.209 01:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.209 01:30:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.209 01:30:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:43.209 01:30:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.209 01:30:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.209 01:30:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.209 ************************************ 00:10:43.209 START TEST raid_read_error_test 00:10:43.209 ************************************ 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:43.209 01:30:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fU6BSmJGDk 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68891 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68891 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68891 ']' 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.209 01:30:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.210 [2024-11-17 01:30:51.596393] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:43.210 [2024-11-17 01:30:51.596636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68891 ] 00:10:43.470 [2024-11-17 01:30:51.780643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.470 [2024-11-17 01:30:51.910049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.730 [2024-11-17 01:30:52.131192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.730 [2024-11-17 01:30:52.131233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.990 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.990 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.990 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.990 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.990 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.990 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.250 BaseBdev1_malloc 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.250 true 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.250 [2024-11-17 01:30:52.507566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:44.250 [2024-11-17 01:30:52.507635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.250 [2024-11-17 01:30:52.507661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:44.250 [2024-11-17 01:30:52.507674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.250 [2024-11-17 01:30:52.510111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.250 [2024-11-17 01:30:52.510217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:44.250 BaseBdev1 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.250 BaseBdev2_malloc 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.250 true 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.250 [2024-11-17 01:30:52.575506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:44.250 [2024-11-17 01:30:52.575562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.250 [2024-11-17 01:30:52.575579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:44.250 [2024-11-17 01:30:52.575590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.250 [2024-11-17 01:30:52.577738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.250 [2024-11-17 01:30:52.577790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:44.250 BaseBdev2 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.250 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.250 BaseBdev3_malloc 00:10:44.251 01:30:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.251 true 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.251 [2024-11-17 01:30:52.657856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:44.251 [2024-11-17 01:30:52.657979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.251 [2024-11-17 01:30:52.658006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:44.251 [2024-11-17 01:30:52.658020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.251 [2024-11-17 01:30:52.660598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.251 [2024-11-17 01:30:52.660645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:44.251 BaseBdev3 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.251 [2024-11-17 01:30:52.669916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.251 [2024-11-17 01:30:52.671984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.251 [2024-11-17 01:30:52.672070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.251 [2024-11-17 01:30:52.672307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:44.251 [2024-11-17 01:30:52.672321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:44.251 [2024-11-17 01:30:52.672605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:44.251 [2024-11-17 01:30:52.672798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:44.251 [2024-11-17 01:30:52.672814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:44.251 [2024-11-17 01:30:52.672990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.251 01:30:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.251 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.511 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.511 "name": "raid_bdev1", 00:10:44.511 "uuid": "3529efa6-ee35-4c72-b6c7-3466b14fdad4", 00:10:44.511 "strip_size_kb": 0, 00:10:44.511 "state": "online", 00:10:44.511 "raid_level": "raid1", 00:10:44.511 "superblock": true, 00:10:44.511 "num_base_bdevs": 3, 00:10:44.511 "num_base_bdevs_discovered": 3, 00:10:44.511 "num_base_bdevs_operational": 3, 00:10:44.511 "base_bdevs_list": [ 00:10:44.511 { 00:10:44.511 "name": "BaseBdev1", 00:10:44.511 "uuid": "6ece68e4-ad2e-5b52-9c78-dabeb3855545", 00:10:44.511 "is_configured": true, 00:10:44.511 "data_offset": 2048, 00:10:44.511 "data_size": 63488 00:10:44.511 }, 00:10:44.511 { 00:10:44.511 "name": "BaseBdev2", 00:10:44.511 "uuid": "f90a4f25-fdba-5c1a-b8cf-37d39232155c", 00:10:44.511 "is_configured": true, 00:10:44.511 "data_offset": 2048, 00:10:44.511 "data_size": 63488 
00:10:44.511 }, 00:10:44.511 { 00:10:44.511 "name": "BaseBdev3", 00:10:44.511 "uuid": "c05c8c10-08e2-5b62-ab76-588045d7bf58", 00:10:44.511 "is_configured": true, 00:10:44.511 "data_offset": 2048, 00:10:44.511 "data_size": 63488 00:10:44.511 } 00:10:44.511 ] 00:10:44.511 }' 00:10:44.511 01:30:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.511 01:30:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.771 01:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:44.771 01:30:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:45.030 [2024-11-17 01:30:53.250135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.968 
01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.968 "name": "raid_bdev1", 00:10:45.968 "uuid": "3529efa6-ee35-4c72-b6c7-3466b14fdad4", 00:10:45.968 "strip_size_kb": 0, 00:10:45.968 "state": "online", 00:10:45.968 "raid_level": "raid1", 00:10:45.968 "superblock": true, 00:10:45.968 "num_base_bdevs": 3, 00:10:45.968 "num_base_bdevs_discovered": 3, 00:10:45.968 "num_base_bdevs_operational": 3, 00:10:45.968 "base_bdevs_list": [ 00:10:45.968 { 00:10:45.968 "name": "BaseBdev1", 00:10:45.968 "uuid": "6ece68e4-ad2e-5b52-9c78-dabeb3855545", 
00:10:45.968 "is_configured": true, 00:10:45.968 "data_offset": 2048, 00:10:45.968 "data_size": 63488 00:10:45.968 }, 00:10:45.968 { 00:10:45.968 "name": "BaseBdev2", 00:10:45.968 "uuid": "f90a4f25-fdba-5c1a-b8cf-37d39232155c", 00:10:45.968 "is_configured": true, 00:10:45.968 "data_offset": 2048, 00:10:45.968 "data_size": 63488 00:10:45.968 }, 00:10:45.968 { 00:10:45.968 "name": "BaseBdev3", 00:10:45.968 "uuid": "c05c8c10-08e2-5b62-ab76-588045d7bf58", 00:10:45.968 "is_configured": true, 00:10:45.968 "data_offset": 2048, 00:10:45.968 "data_size": 63488 00:10:45.968 } 00:10:45.968 ] 00:10:45.968 }' 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.968 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.227 [2024-11-17 01:30:54.646240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.227 [2024-11-17 01:30:54.646344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.227 [2024-11-17 01:30:54.648881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.227 [2024-11-17 01:30:54.648985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.227 [2024-11-17 01:30:54.649103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.227 [2024-11-17 01:30:54.649158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:46.227 { 00:10:46.227 "results": [ 00:10:46.227 { 00:10:46.227 "job": "raid_bdev1", 
00:10:46.227 "core_mask": "0x1", 00:10:46.227 "workload": "randrw", 00:10:46.227 "percentage": 50, 00:10:46.227 "status": "finished", 00:10:46.227 "queue_depth": 1, 00:10:46.227 "io_size": 131072, 00:10:46.227 "runtime": 1.396938, 00:10:46.227 "iops": 13475.18644349284, 00:10:46.227 "mibps": 1684.398305436605, 00:10:46.227 "io_failed": 0, 00:10:46.227 "io_timeout": 0, 00:10:46.227 "avg_latency_us": 71.61754074052078, 00:10:46.227 "min_latency_us": 22.581659388646287, 00:10:46.227 "max_latency_us": 1473.844541484716 00:10:46.227 } 00:10:46.227 ], 00:10:46.227 "core_count": 1 00:10:46.227 } 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68891 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68891 ']' 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68891 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.227 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68891 00:10:46.486 killing process with pid 68891 00:10:46.486 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.486 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.486 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68891' 00:10:46.486 01:30:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68891 00:10:46.486 [2024-11-17 01:30:54.696841] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.486 01:30:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68891 00:10:46.486 [2024-11-17 01:30:54.914133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.862 01:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fU6BSmJGDk 00:10:47.862 01:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:47.862 01:30:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:47.862 01:30:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:47.862 01:30:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:47.862 ************************************ 00:10:47.862 END TEST raid_read_error_test 00:10:47.862 ************************************ 00:10:47.862 01:30:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.863 01:30:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:47.863 01:30:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:47.863 00:10:47.863 real 0m4.522s 00:10:47.863 user 0m5.430s 00:10:47.863 sys 0m0.599s 00:10:47.863 01:30:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.863 01:30:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.863 01:30:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:47.863 01:30:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.863 01:30:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.863 01:30:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.863 ************************************ 00:10:47.863 START TEST raid_write_error_test 00:10:47.863 ************************************ 00:10:47.863 01:30:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ugf6YrWNYW 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69031 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69031 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69031 ']' 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.863 01:30:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.863 [2024-11-17 01:30:56.184487] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:47.863 [2024-11-17 01:30:56.184677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69031 ] 00:10:48.122 [2024-11-17 01:30:56.356680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.122 [2024-11-17 01:30:56.469945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.381 [2024-11-17 01:30:56.659170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.381 [2024-11-17 01:30:56.659232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.640 BaseBdev1_malloc 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.640 true 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.640 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.900 [2024-11-17 01:30:57.099543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:48.900 [2024-11-17 01:30:57.099598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.900 [2024-11-17 01:30:57.099616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:48.900 [2024-11-17 01:30:57.099626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.900 [2024-11-17 01:30:57.101752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.900 [2024-11-17 01:30:57.101808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.900 BaseBdev1 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:48.900 BaseBdev2_malloc 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.900 true 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.900 [2024-11-17 01:30:57.163326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:48.900 [2024-11-17 01:30:57.163378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.900 [2024-11-17 01:30:57.163393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:48.900 [2024-11-17 01:30:57.163403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.900 [2024-11-17 01:30:57.165410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.900 [2024-11-17 01:30:57.165458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:48.900 BaseBdev2 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.900 01:30:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.900 BaseBdev3_malloc 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.900 true 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.900 [2024-11-17 01:30:57.264163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:48.900 [2024-11-17 01:30:57.264255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.900 [2024-11-17 01:30:57.264274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:48.900 [2024-11-17 01:30:57.264285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.900 [2024-11-17 01:30:57.266324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.900 [2024-11-17 01:30:57.266363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:48.900 BaseBdev3 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.900 [2024-11-17 01:30:57.276213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.900 [2024-11-17 01:30:57.277977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.900 [2024-11-17 01:30:57.278048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.900 [2024-11-17 01:30:57.278237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:48.900 [2024-11-17 01:30:57.278250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.900 [2024-11-17 01:30:57.278499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:48.900 [2024-11-17 01:30:57.278651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:48.900 [2024-11-17 01:30:57.278662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:48.900 [2024-11-17 01:30:57.278800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.900 "name": "raid_bdev1", 00:10:48.900 "uuid": "77781a95-42b2-4ffe-88c3-d2280194748b", 00:10:48.900 "strip_size_kb": 0, 00:10:48.900 "state": "online", 00:10:48.900 "raid_level": "raid1", 00:10:48.900 "superblock": true, 00:10:48.900 "num_base_bdevs": 3, 00:10:48.900 "num_base_bdevs_discovered": 3, 00:10:48.900 "num_base_bdevs_operational": 3, 00:10:48.900 "base_bdevs_list": [ 00:10:48.900 { 00:10:48.900 "name": "BaseBdev1", 00:10:48.900 
"uuid": "8a74f06b-414a-5ffe-b35d-88cbed475703", 00:10:48.900 "is_configured": true, 00:10:48.900 "data_offset": 2048, 00:10:48.900 "data_size": 63488 00:10:48.900 }, 00:10:48.900 { 00:10:48.900 "name": "BaseBdev2", 00:10:48.900 "uuid": "372554b3-64e6-57fe-a253-858f7d11575b", 00:10:48.900 "is_configured": true, 00:10:48.900 "data_offset": 2048, 00:10:48.900 "data_size": 63488 00:10:48.900 }, 00:10:48.900 { 00:10:48.900 "name": "BaseBdev3", 00:10:48.900 "uuid": "8b29918e-6bc4-5149-a9c1-629e0d955dbb", 00:10:48.900 "is_configured": true, 00:10:48.900 "data_offset": 2048, 00:10:48.900 "data_size": 63488 00:10:48.900 } 00:10:48.900 ] 00:10:48.900 }' 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.900 01:30:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.469 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:49.469 01:30:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:49.469 [2024-11-17 01:30:57.832410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.407 [2024-11-17 01:30:58.743748] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:50.407 [2024-11-17 01:30:58.743915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.407 [2024-11-17 01:30:58.744175] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.407 "name": "raid_bdev1", 00:10:50.407 "uuid": "77781a95-42b2-4ffe-88c3-d2280194748b", 00:10:50.407 "strip_size_kb": 0, 00:10:50.407 "state": "online", 00:10:50.407 "raid_level": "raid1", 00:10:50.407 "superblock": true, 00:10:50.407 "num_base_bdevs": 3, 00:10:50.407 "num_base_bdevs_discovered": 2, 00:10:50.407 "num_base_bdevs_operational": 2, 00:10:50.407 "base_bdevs_list": [ 00:10:50.407 { 00:10:50.407 "name": null, 00:10:50.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.407 "is_configured": false, 00:10:50.407 "data_offset": 0, 00:10:50.407 "data_size": 63488 00:10:50.407 }, 00:10:50.407 { 00:10:50.407 "name": "BaseBdev2", 00:10:50.407 "uuid": "372554b3-64e6-57fe-a253-858f7d11575b", 00:10:50.407 "is_configured": true, 00:10:50.407 "data_offset": 2048, 00:10:50.407 "data_size": 63488 00:10:50.407 }, 00:10:50.407 { 00:10:50.407 "name": "BaseBdev3", 00:10:50.407 "uuid": "8b29918e-6bc4-5149-a9c1-629e0d955dbb", 00:10:50.407 "is_configured": true, 00:10:50.407 "data_offset": 2048, 00:10:50.407 "data_size": 63488 00:10:50.407 } 00:10:50.407 ] 00:10:50.407 }' 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.407 01:30:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.744 01:30:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.744 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.744 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.002 [2024-11-17 01:30:59.185776] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.002 [2024-11-17 01:30:59.185813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.002 [2024-11-17 01:30:59.188497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.002 [2024-11-17 01:30:59.188562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.002 [2024-11-17 01:30:59.188643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.002 [2024-11-17 01:30:59.188658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:51.002 { 00:10:51.002 "results": [ 00:10:51.002 { 00:10:51.002 "job": "raid_bdev1", 00:10:51.002 "core_mask": "0x1", 00:10:51.002 "workload": "randrw", 00:10:51.002 "percentage": 50, 00:10:51.002 "status": "finished", 00:10:51.002 "queue_depth": 1, 00:10:51.002 "io_size": 131072, 00:10:51.002 "runtime": 1.354301, 00:10:51.002 "iops": 15273.561785747777, 00:10:51.002 "mibps": 1909.1952232184722, 00:10:51.002 "io_failed": 0, 00:10:51.002 "io_timeout": 0, 00:10:51.002 "avg_latency_us": 62.913982137975225, 00:10:51.002 "min_latency_us": 22.46986899563319, 00:10:51.002 "max_latency_us": 1352.216593886463 00:10:51.002 } 00:10:51.002 ], 00:10:51.002 "core_count": 1 00:10:51.002 } 00:10:51.002 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.002 01:30:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69031 00:10:51.002 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69031 ']' 00:10:51.002 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69031 00:10:51.002 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:51.002 01:30:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.002 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69031 00:10:51.002 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.002 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.003 killing process with pid 69031 00:10:51.003 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69031' 00:10:51.003 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69031 00:10:51.003 [2024-11-17 01:30:59.224929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.003 01:30:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69031 00:10:51.003 [2024-11-17 01:30:59.458012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ugf6YrWNYW 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:52.382 00:10:52.382 real 0m4.538s 00:10:52.382 user 0m5.407s 00:10:52.382 sys 0m0.565s 00:10:52.382 01:31:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.382 ************************************ 00:10:52.382 END TEST raid_write_error_test 00:10:52.382 ************************************ 00:10:52.382 01:31:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.382 01:31:00 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:52.382 01:31:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:52.382 01:31:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:52.382 01:31:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.382 01:31:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.382 01:31:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.382 ************************************ 00:10:52.382 START TEST raid_state_function_test 00:10:52.382 ************************************ 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:52.382 
01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69180 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69180' 00:10:52.382 Process raid pid: 69180 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69180 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69180 ']' 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.382 01:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.382 [2024-11-17 01:31:00.776842] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:52.382 [2024-11-17 01:31:00.776954] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.642 [2024-11-17 01:31:00.936488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.642 [2024-11-17 01:31:01.055143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.901 [2024-11-17 01:31:01.240375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.901 [2024-11-17 01:31:01.240412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.161 [2024-11-17 01:31:01.602602] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.161 [2024-11-17 01:31:01.602662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.161 [2024-11-17 01:31:01.602675] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.161 [2024-11-17 01:31:01.602686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.161 [2024-11-17 01:31:01.602694] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:53.161 [2024-11-17 01:31:01.602704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.161 [2024-11-17 01:31:01.602711] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.161 [2024-11-17 01:31:01.602720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.161 01:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.421 01:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.421 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.421 "name": "Existed_Raid", 00:10:53.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.421 "strip_size_kb": 64, 00:10:53.421 "state": "configuring", 00:10:53.421 "raid_level": "raid0", 00:10:53.421 "superblock": false, 00:10:53.421 "num_base_bdevs": 4, 00:10:53.421 "num_base_bdevs_discovered": 0, 00:10:53.421 "num_base_bdevs_operational": 4, 00:10:53.421 "base_bdevs_list": [ 00:10:53.421 { 00:10:53.421 "name": "BaseBdev1", 00:10:53.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.421 "is_configured": false, 00:10:53.421 "data_offset": 0, 00:10:53.421 "data_size": 0 00:10:53.421 }, 00:10:53.421 { 00:10:53.421 "name": "BaseBdev2", 00:10:53.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.421 "is_configured": false, 00:10:53.421 "data_offset": 0, 00:10:53.421 "data_size": 0 00:10:53.421 }, 00:10:53.421 { 00:10:53.421 "name": "BaseBdev3", 00:10:53.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.421 "is_configured": false, 00:10:53.421 "data_offset": 0, 00:10:53.421 "data_size": 0 00:10:53.421 }, 00:10:53.421 { 00:10:53.421 "name": "BaseBdev4", 00:10:53.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.421 "is_configured": false, 00:10:53.421 "data_offset": 0, 00:10:53.421 "data_size": 0 00:10:53.421 } 00:10:53.421 ] 00:10:53.421 }' 00:10:53.421 01:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.421 01:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.681 [2024-11-17 01:31:02.005923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.681 [2024-11-17 01:31:02.005973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.681 [2024-11-17 01:31:02.013899] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.681 [2024-11-17 01:31:02.014005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.681 [2024-11-17 01:31:02.014035] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.681 [2024-11-17 01:31:02.014059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.681 [2024-11-17 01:31:02.014078] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.681 [2024-11-17 01:31:02.014100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.681 [2024-11-17 01:31:02.014118] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.681 [2024-11-17 01:31:02.014140] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.681 [2024-11-17 01:31:02.058701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.681 BaseBdev1 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.681 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.681 [ 00:10:53.681 { 00:10:53.681 "name": "BaseBdev1", 00:10:53.681 "aliases": [ 00:10:53.681 "f180b60c-391a-45c3-b0f2-e378958454b0" 00:10:53.681 ], 00:10:53.681 "product_name": "Malloc disk", 00:10:53.681 "block_size": 512, 00:10:53.681 "num_blocks": 65536, 00:10:53.681 "uuid": "f180b60c-391a-45c3-b0f2-e378958454b0", 00:10:53.681 "assigned_rate_limits": { 00:10:53.681 "rw_ios_per_sec": 0, 00:10:53.681 "rw_mbytes_per_sec": 0, 00:10:53.681 "r_mbytes_per_sec": 0, 00:10:53.681 "w_mbytes_per_sec": 0 00:10:53.681 }, 00:10:53.681 "claimed": true, 00:10:53.681 "claim_type": "exclusive_write", 00:10:53.681 "zoned": false, 00:10:53.681 "supported_io_types": { 00:10:53.681 "read": true, 00:10:53.681 "write": true, 00:10:53.681 "unmap": true, 00:10:53.681 "flush": true, 00:10:53.681 "reset": true, 00:10:53.681 "nvme_admin": false, 00:10:53.681 "nvme_io": false, 00:10:53.681 "nvme_io_md": false, 00:10:53.681 "write_zeroes": true, 00:10:53.681 "zcopy": true, 00:10:53.681 "get_zone_info": false, 00:10:53.681 "zone_management": false, 00:10:53.681 "zone_append": false, 00:10:53.681 "compare": false, 00:10:53.681 "compare_and_write": false, 00:10:53.681 "abort": true, 00:10:53.681 "seek_hole": false, 00:10:53.681 "seek_data": false, 00:10:53.681 "copy": true, 00:10:53.682 "nvme_iov_md": false 00:10:53.682 }, 00:10:53.682 "memory_domains": [ 00:10:53.682 { 00:10:53.682 "dma_device_id": "system", 00:10:53.682 "dma_device_type": 1 00:10:53.682 }, 00:10:53.682 { 00:10:53.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.682 "dma_device_type": 2 00:10:53.682 } 00:10:53.682 ], 00:10:53.682 "driver_specific": {} 00:10:53.682 } 00:10:53.682 ] 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.682 "name": "Existed_Raid", 
00:10:53.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.682 "strip_size_kb": 64, 00:10:53.682 "state": "configuring", 00:10:53.682 "raid_level": "raid0", 00:10:53.682 "superblock": false, 00:10:53.682 "num_base_bdevs": 4, 00:10:53.682 "num_base_bdevs_discovered": 1, 00:10:53.682 "num_base_bdevs_operational": 4, 00:10:53.682 "base_bdevs_list": [ 00:10:53.682 { 00:10:53.682 "name": "BaseBdev1", 00:10:53.682 "uuid": "f180b60c-391a-45c3-b0f2-e378958454b0", 00:10:53.682 "is_configured": true, 00:10:53.682 "data_offset": 0, 00:10:53.682 "data_size": 65536 00:10:53.682 }, 00:10:53.682 { 00:10:53.682 "name": "BaseBdev2", 00:10:53.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.682 "is_configured": false, 00:10:53.682 "data_offset": 0, 00:10:53.682 "data_size": 0 00:10:53.682 }, 00:10:53.682 { 00:10:53.682 "name": "BaseBdev3", 00:10:53.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.682 "is_configured": false, 00:10:53.682 "data_offset": 0, 00:10:53.682 "data_size": 0 00:10:53.682 }, 00:10:53.682 { 00:10:53.682 "name": "BaseBdev4", 00:10:53.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.682 "is_configured": false, 00:10:53.682 "data_offset": 0, 00:10:53.682 "data_size": 0 00:10:53.682 } 00:10:53.682 ] 00:10:53.682 }' 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.682 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.251 [2024-11-17 01:31:02.517962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.251 [2024-11-17 01:31:02.518087] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.251 [2024-11-17 01:31:02.525987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.251 [2024-11-17 01:31:02.527877] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.251 [2024-11-17 01:31:02.527959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.251 [2024-11-17 01:31:02.527988] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.251 [2024-11-17 01:31:02.528013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.251 [2024-11-17 01:31:02.528032] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.251 [2024-11-17 01:31:02.528054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.251 "name": "Existed_Raid", 00:10:54.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.251 "strip_size_kb": 64, 00:10:54.251 "state": "configuring", 00:10:54.251 "raid_level": "raid0", 00:10:54.251 "superblock": false, 00:10:54.251 "num_base_bdevs": 4, 00:10:54.251 
"num_base_bdevs_discovered": 1, 00:10:54.251 "num_base_bdevs_operational": 4, 00:10:54.251 "base_bdevs_list": [ 00:10:54.251 { 00:10:54.251 "name": "BaseBdev1", 00:10:54.251 "uuid": "f180b60c-391a-45c3-b0f2-e378958454b0", 00:10:54.251 "is_configured": true, 00:10:54.251 "data_offset": 0, 00:10:54.251 "data_size": 65536 00:10:54.251 }, 00:10:54.251 { 00:10:54.251 "name": "BaseBdev2", 00:10:54.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.251 "is_configured": false, 00:10:54.251 "data_offset": 0, 00:10:54.251 "data_size": 0 00:10:54.251 }, 00:10:54.251 { 00:10:54.251 "name": "BaseBdev3", 00:10:54.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.251 "is_configured": false, 00:10:54.251 "data_offset": 0, 00:10:54.251 "data_size": 0 00:10:54.251 }, 00:10:54.251 { 00:10:54.251 "name": "BaseBdev4", 00:10:54.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.251 "is_configured": false, 00:10:54.251 "data_offset": 0, 00:10:54.251 "data_size": 0 00:10:54.251 } 00:10:54.251 ] 00:10:54.251 }' 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.251 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.510 [2024-11-17 01:31:02.954376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.510 BaseBdev2 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.510 01:31:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.510 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.769 [ 00:10:54.769 { 00:10:54.769 "name": "BaseBdev2", 00:10:54.769 "aliases": [ 00:10:54.769 "7fd8d63b-26c4-4fe4-9cd2-be8906c4e7f2" 00:10:54.769 ], 00:10:54.769 "product_name": "Malloc disk", 00:10:54.769 "block_size": 512, 00:10:54.769 "num_blocks": 65536, 00:10:54.769 "uuid": "7fd8d63b-26c4-4fe4-9cd2-be8906c4e7f2", 00:10:54.769 "assigned_rate_limits": { 00:10:54.769 "rw_ios_per_sec": 0, 00:10:54.769 "rw_mbytes_per_sec": 0, 00:10:54.769 "r_mbytes_per_sec": 0, 00:10:54.769 "w_mbytes_per_sec": 0 00:10:54.769 }, 00:10:54.769 "claimed": true, 00:10:54.769 "claim_type": "exclusive_write", 00:10:54.769 "zoned": false, 00:10:54.769 "supported_io_types": { 
00:10:54.769 "read": true, 00:10:54.769 "write": true, 00:10:54.769 "unmap": true, 00:10:54.769 "flush": true, 00:10:54.769 "reset": true, 00:10:54.769 "nvme_admin": false, 00:10:54.769 "nvme_io": false, 00:10:54.769 "nvme_io_md": false, 00:10:54.769 "write_zeroes": true, 00:10:54.769 "zcopy": true, 00:10:54.769 "get_zone_info": false, 00:10:54.769 "zone_management": false, 00:10:54.769 "zone_append": false, 00:10:54.769 "compare": false, 00:10:54.769 "compare_and_write": false, 00:10:54.769 "abort": true, 00:10:54.769 "seek_hole": false, 00:10:54.769 "seek_data": false, 00:10:54.769 "copy": true, 00:10:54.769 "nvme_iov_md": false 00:10:54.769 }, 00:10:54.769 "memory_domains": [ 00:10:54.769 { 00:10:54.769 "dma_device_id": "system", 00:10:54.769 "dma_device_type": 1 00:10:54.769 }, 00:10:54.769 { 00:10:54.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.769 "dma_device_type": 2 00:10:54.769 } 00:10:54.769 ], 00:10:54.769 "driver_specific": {} 00:10:54.769 } 00:10:54.769 ] 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:54.769 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.770 01:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.770 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.770 "name": "Existed_Raid", 00:10:54.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.770 "strip_size_kb": 64, 00:10:54.770 "state": "configuring", 00:10:54.770 "raid_level": "raid0", 00:10:54.770 "superblock": false, 00:10:54.770 "num_base_bdevs": 4, 00:10:54.770 "num_base_bdevs_discovered": 2, 00:10:54.770 "num_base_bdevs_operational": 4, 00:10:54.770 "base_bdevs_list": [ 00:10:54.770 { 00:10:54.770 "name": "BaseBdev1", 00:10:54.770 "uuid": "f180b60c-391a-45c3-b0f2-e378958454b0", 00:10:54.770 "is_configured": true, 00:10:54.770 "data_offset": 0, 00:10:54.770 "data_size": 65536 00:10:54.770 }, 00:10:54.770 { 00:10:54.770 "name": "BaseBdev2", 00:10:54.770 "uuid": "7fd8d63b-26c4-4fe4-9cd2-be8906c4e7f2", 00:10:54.770 
"is_configured": true, 00:10:54.770 "data_offset": 0, 00:10:54.770 "data_size": 65536 00:10:54.770 }, 00:10:54.770 { 00:10:54.770 "name": "BaseBdev3", 00:10:54.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.770 "is_configured": false, 00:10:54.770 "data_offset": 0, 00:10:54.770 "data_size": 0 00:10:54.770 }, 00:10:54.770 { 00:10:54.770 "name": "BaseBdev4", 00:10:54.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.770 "is_configured": false, 00:10:54.770 "data_offset": 0, 00:10:54.770 "data_size": 0 00:10:54.770 } 00:10:54.770 ] 00:10:54.770 }' 00:10:54.770 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.770 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.029 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.029 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.029 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.288 [2024-11-17 01:31:03.501845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.288 BaseBdev3 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.288 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.288 [ 00:10:55.288 { 00:10:55.288 "name": "BaseBdev3", 00:10:55.288 "aliases": [ 00:10:55.288 "608da631-1536-4a42-8909-40a0c8ce066e" 00:10:55.288 ], 00:10:55.288 "product_name": "Malloc disk", 00:10:55.288 "block_size": 512, 00:10:55.288 "num_blocks": 65536, 00:10:55.288 "uuid": "608da631-1536-4a42-8909-40a0c8ce066e", 00:10:55.288 "assigned_rate_limits": { 00:10:55.288 "rw_ios_per_sec": 0, 00:10:55.288 "rw_mbytes_per_sec": 0, 00:10:55.288 "r_mbytes_per_sec": 0, 00:10:55.288 "w_mbytes_per_sec": 0 00:10:55.288 }, 00:10:55.288 "claimed": true, 00:10:55.288 "claim_type": "exclusive_write", 00:10:55.288 "zoned": false, 00:10:55.288 "supported_io_types": { 00:10:55.288 "read": true, 00:10:55.288 "write": true, 00:10:55.288 "unmap": true, 00:10:55.288 "flush": true, 00:10:55.288 "reset": true, 00:10:55.288 "nvme_admin": false, 00:10:55.288 "nvme_io": false, 00:10:55.288 "nvme_io_md": false, 00:10:55.288 "write_zeroes": true, 00:10:55.288 "zcopy": true, 00:10:55.288 "get_zone_info": false, 00:10:55.288 "zone_management": false, 00:10:55.288 "zone_append": false, 00:10:55.288 "compare": false, 00:10:55.288 "compare_and_write": false, 
00:10:55.288 "abort": true, 00:10:55.288 "seek_hole": false, 00:10:55.288 "seek_data": false, 00:10:55.288 "copy": true, 00:10:55.288 "nvme_iov_md": false 00:10:55.288 }, 00:10:55.288 "memory_domains": [ 00:10:55.288 { 00:10:55.288 "dma_device_id": "system", 00:10:55.288 "dma_device_type": 1 00:10:55.289 }, 00:10:55.289 { 00:10:55.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.289 "dma_device_type": 2 00:10:55.289 } 00:10:55.289 ], 00:10:55.289 "driver_specific": {} 00:10:55.289 } 00:10:55.289 ] 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.289 "name": "Existed_Raid", 00:10:55.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.289 "strip_size_kb": 64, 00:10:55.289 "state": "configuring", 00:10:55.289 "raid_level": "raid0", 00:10:55.289 "superblock": false, 00:10:55.289 "num_base_bdevs": 4, 00:10:55.289 "num_base_bdevs_discovered": 3, 00:10:55.289 "num_base_bdevs_operational": 4, 00:10:55.289 "base_bdevs_list": [ 00:10:55.289 { 00:10:55.289 "name": "BaseBdev1", 00:10:55.289 "uuid": "f180b60c-391a-45c3-b0f2-e378958454b0", 00:10:55.289 "is_configured": true, 00:10:55.289 "data_offset": 0, 00:10:55.289 "data_size": 65536 00:10:55.289 }, 00:10:55.289 { 00:10:55.289 "name": "BaseBdev2", 00:10:55.289 "uuid": "7fd8d63b-26c4-4fe4-9cd2-be8906c4e7f2", 00:10:55.289 "is_configured": true, 00:10:55.289 "data_offset": 0, 00:10:55.289 "data_size": 65536 00:10:55.289 }, 00:10:55.289 { 00:10:55.289 "name": "BaseBdev3", 00:10:55.289 "uuid": "608da631-1536-4a42-8909-40a0c8ce066e", 00:10:55.289 "is_configured": true, 00:10:55.289 "data_offset": 0, 00:10:55.289 "data_size": 65536 00:10:55.289 }, 00:10:55.289 { 00:10:55.289 "name": "BaseBdev4", 00:10:55.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.289 "is_configured": false, 
00:10:55.289 "data_offset": 0, 00:10:55.289 "data_size": 0 00:10:55.289 } 00:10:55.289 ] 00:10:55.289 }' 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.289 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.548 01:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:55.548 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.548 01:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.807 [2024-11-17 01:31:04.007208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.807 [2024-11-17 01:31:04.007257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.807 [2024-11-17 01:31:04.007266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:55.807 [2024-11-17 01:31:04.007533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:55.807 [2024-11-17 01:31:04.007693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.807 [2024-11-17 01:31:04.007704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:55.807 [2024-11-17 01:31:04.007996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.807 BaseBdev4 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.807 [ 00:10:55.807 { 00:10:55.807 "name": "BaseBdev4", 00:10:55.807 "aliases": [ 00:10:55.807 "0d3fdc75-a699-4b38-b4f5-1d0003c8f638" 00:10:55.807 ], 00:10:55.807 "product_name": "Malloc disk", 00:10:55.807 "block_size": 512, 00:10:55.807 "num_blocks": 65536, 00:10:55.807 "uuid": "0d3fdc75-a699-4b38-b4f5-1d0003c8f638", 00:10:55.807 "assigned_rate_limits": { 00:10:55.807 "rw_ios_per_sec": 0, 00:10:55.807 "rw_mbytes_per_sec": 0, 00:10:55.807 "r_mbytes_per_sec": 0, 00:10:55.807 "w_mbytes_per_sec": 0 00:10:55.807 }, 00:10:55.807 "claimed": true, 00:10:55.807 "claim_type": "exclusive_write", 00:10:55.807 "zoned": false, 00:10:55.807 "supported_io_types": { 00:10:55.807 "read": true, 00:10:55.807 "write": true, 00:10:55.807 "unmap": true, 00:10:55.807 "flush": true, 00:10:55.807 "reset": true, 00:10:55.807 
"nvme_admin": false, 00:10:55.807 "nvme_io": false, 00:10:55.807 "nvme_io_md": false, 00:10:55.807 "write_zeroes": true, 00:10:55.807 "zcopy": true, 00:10:55.807 "get_zone_info": false, 00:10:55.807 "zone_management": false, 00:10:55.807 "zone_append": false, 00:10:55.807 "compare": false, 00:10:55.807 "compare_and_write": false, 00:10:55.807 "abort": true, 00:10:55.807 "seek_hole": false, 00:10:55.807 "seek_data": false, 00:10:55.807 "copy": true, 00:10:55.807 "nvme_iov_md": false 00:10:55.807 }, 00:10:55.807 "memory_domains": [ 00:10:55.807 { 00:10:55.807 "dma_device_id": "system", 00:10:55.807 "dma_device_type": 1 00:10:55.807 }, 00:10:55.807 { 00:10:55.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.807 "dma_device_type": 2 00:10:55.807 } 00:10:55.807 ], 00:10:55.807 "driver_specific": {} 00:10:55.807 } 00:10:55.807 ] 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.807 01:31:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.807 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.808 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.808 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.808 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.808 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.808 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.808 "name": "Existed_Raid", 00:10:55.808 "uuid": "3ec2be55-7880-4a39-ade6-841742dfe74d", 00:10:55.808 "strip_size_kb": 64, 00:10:55.808 "state": "online", 00:10:55.808 "raid_level": "raid0", 00:10:55.808 "superblock": false, 00:10:55.808 "num_base_bdevs": 4, 00:10:55.808 "num_base_bdevs_discovered": 4, 00:10:55.808 "num_base_bdevs_operational": 4, 00:10:55.808 "base_bdevs_list": [ 00:10:55.808 { 00:10:55.808 "name": "BaseBdev1", 00:10:55.808 "uuid": "f180b60c-391a-45c3-b0f2-e378958454b0", 00:10:55.808 "is_configured": true, 00:10:55.808 "data_offset": 0, 00:10:55.808 "data_size": 65536 00:10:55.808 }, 00:10:55.808 { 00:10:55.808 "name": "BaseBdev2", 00:10:55.808 "uuid": "7fd8d63b-26c4-4fe4-9cd2-be8906c4e7f2", 00:10:55.808 "is_configured": true, 00:10:55.808 "data_offset": 0, 00:10:55.808 "data_size": 65536 00:10:55.808 }, 00:10:55.808 { 00:10:55.808 "name": "BaseBdev3", 00:10:55.808 "uuid": 
"608da631-1536-4a42-8909-40a0c8ce066e", 00:10:55.808 "is_configured": true, 00:10:55.808 "data_offset": 0, 00:10:55.808 "data_size": 65536 00:10:55.808 }, 00:10:55.808 { 00:10:55.808 "name": "BaseBdev4", 00:10:55.808 "uuid": "0d3fdc75-a699-4b38-b4f5-1d0003c8f638", 00:10:55.808 "is_configured": true, 00:10:55.808 "data_offset": 0, 00:10:55.808 "data_size": 65536 00:10:55.808 } 00:10:55.808 ] 00:10:55.808 }' 00:10:55.808 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.808 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.067 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.068 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.068 [2024-11-17 01:31:04.459046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.068 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.068 01:31:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.068 "name": "Existed_Raid", 00:10:56.068 "aliases": [ 00:10:56.068 "3ec2be55-7880-4a39-ade6-841742dfe74d" 00:10:56.068 ], 00:10:56.068 "product_name": "Raid Volume", 00:10:56.068 "block_size": 512, 00:10:56.068 "num_blocks": 262144, 00:10:56.068 "uuid": "3ec2be55-7880-4a39-ade6-841742dfe74d", 00:10:56.068 "assigned_rate_limits": { 00:10:56.068 "rw_ios_per_sec": 0, 00:10:56.068 "rw_mbytes_per_sec": 0, 00:10:56.068 "r_mbytes_per_sec": 0, 00:10:56.068 "w_mbytes_per_sec": 0 00:10:56.068 }, 00:10:56.068 "claimed": false, 00:10:56.068 "zoned": false, 00:10:56.068 "supported_io_types": { 00:10:56.068 "read": true, 00:10:56.068 "write": true, 00:10:56.068 "unmap": true, 00:10:56.068 "flush": true, 00:10:56.068 "reset": true, 00:10:56.068 "nvme_admin": false, 00:10:56.068 "nvme_io": false, 00:10:56.068 "nvme_io_md": false, 00:10:56.068 "write_zeroes": true, 00:10:56.068 "zcopy": false, 00:10:56.068 "get_zone_info": false, 00:10:56.068 "zone_management": false, 00:10:56.068 "zone_append": false, 00:10:56.068 "compare": false, 00:10:56.068 "compare_and_write": false, 00:10:56.068 "abort": false, 00:10:56.068 "seek_hole": false, 00:10:56.068 "seek_data": false, 00:10:56.068 "copy": false, 00:10:56.068 "nvme_iov_md": false 00:10:56.068 }, 00:10:56.068 "memory_domains": [ 00:10:56.068 { 00:10:56.068 "dma_device_id": "system", 00:10:56.068 "dma_device_type": 1 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.068 "dma_device_type": 2 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "dma_device_id": "system", 00:10:56.068 "dma_device_type": 1 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.068 "dma_device_type": 2 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "dma_device_id": "system", 00:10:56.068 "dma_device_type": 1 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:56.068 "dma_device_type": 2 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "dma_device_id": "system", 00:10:56.068 "dma_device_type": 1 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.068 "dma_device_type": 2 00:10:56.068 } 00:10:56.068 ], 00:10:56.068 "driver_specific": { 00:10:56.068 "raid": { 00:10:56.068 "uuid": "3ec2be55-7880-4a39-ade6-841742dfe74d", 00:10:56.068 "strip_size_kb": 64, 00:10:56.068 "state": "online", 00:10:56.068 "raid_level": "raid0", 00:10:56.068 "superblock": false, 00:10:56.068 "num_base_bdevs": 4, 00:10:56.068 "num_base_bdevs_discovered": 4, 00:10:56.068 "num_base_bdevs_operational": 4, 00:10:56.068 "base_bdevs_list": [ 00:10:56.068 { 00:10:56.068 "name": "BaseBdev1", 00:10:56.068 "uuid": "f180b60c-391a-45c3-b0f2-e378958454b0", 00:10:56.068 "is_configured": true, 00:10:56.068 "data_offset": 0, 00:10:56.068 "data_size": 65536 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "name": "BaseBdev2", 00:10:56.068 "uuid": "7fd8d63b-26c4-4fe4-9cd2-be8906c4e7f2", 00:10:56.068 "is_configured": true, 00:10:56.068 "data_offset": 0, 00:10:56.068 "data_size": 65536 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "name": "BaseBdev3", 00:10:56.068 "uuid": "608da631-1536-4a42-8909-40a0c8ce066e", 00:10:56.068 "is_configured": true, 00:10:56.068 "data_offset": 0, 00:10:56.068 "data_size": 65536 00:10:56.068 }, 00:10:56.068 { 00:10:56.068 "name": "BaseBdev4", 00:10:56.068 "uuid": "0d3fdc75-a699-4b38-b4f5-1d0003c8f638", 00:10:56.068 "is_configured": true, 00:10:56.068 "data_offset": 0, 00:10:56.068 "data_size": 65536 00:10:56.068 } 00:10:56.068 ] 00:10:56.068 } 00:10:56.068 } 00:10:56.068 }' 00:10:56.068 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.328 BaseBdev2 00:10:56.328 BaseBdev3 
00:10:56.328 BaseBdev4' 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.328 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.329 01:31:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.329 01:31:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.329 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.329 [2024-11-17 01:31:04.778169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.329 [2024-11-17 01:31:04.778274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.329 [2024-11-17 01:31:04.778357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.588 "name": "Existed_Raid", 00:10:56.588 "uuid": "3ec2be55-7880-4a39-ade6-841742dfe74d", 00:10:56.588 "strip_size_kb": 64, 00:10:56.588 "state": "offline", 00:10:56.588 "raid_level": "raid0", 00:10:56.588 "superblock": false, 00:10:56.588 "num_base_bdevs": 4, 00:10:56.588 "num_base_bdevs_discovered": 3, 00:10:56.588 "num_base_bdevs_operational": 3, 00:10:56.588 "base_bdevs_list": [ 00:10:56.588 { 00:10:56.588 "name": null, 00:10:56.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.588 "is_configured": false, 00:10:56.588 "data_offset": 0, 00:10:56.588 "data_size": 65536 00:10:56.588 }, 00:10:56.588 { 00:10:56.588 "name": "BaseBdev2", 00:10:56.588 "uuid": "7fd8d63b-26c4-4fe4-9cd2-be8906c4e7f2", 00:10:56.588 "is_configured": 
true, 00:10:56.588 "data_offset": 0, 00:10:56.588 "data_size": 65536 00:10:56.588 }, 00:10:56.588 { 00:10:56.588 "name": "BaseBdev3", 00:10:56.588 "uuid": "608da631-1536-4a42-8909-40a0c8ce066e", 00:10:56.588 "is_configured": true, 00:10:56.588 "data_offset": 0, 00:10:56.588 "data_size": 65536 00:10:56.588 }, 00:10:56.588 { 00:10:56.588 "name": "BaseBdev4", 00:10:56.588 "uuid": "0d3fdc75-a699-4b38-b4f5-1d0003c8f638", 00:10:56.588 "is_configured": true, 00:10:56.588 "data_offset": 0, 00:10:56.588 "data_size": 65536 00:10:56.588 } 00:10:56.588 ] 00:10:56.588 }' 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.588 01:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.848 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.848 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.848 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.848 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.848 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.848 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.848 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.108 [2024-11-17 01:31:05.336942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.108 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.108 [2024-11-17 01:31:05.486755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.368 01:31:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.368 [2024-11-17 01:31:05.638620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:57.368 [2024-11-17 01:31:05.638676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.368 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 BaseBdev2 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 [ 00:10:57.628 { 00:10:57.628 "name": "BaseBdev2", 00:10:57.628 "aliases": [ 00:10:57.628 "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea" 00:10:57.628 ], 00:10:57.628 "product_name": "Malloc disk", 00:10:57.628 "block_size": 512, 00:10:57.628 "num_blocks": 65536, 00:10:57.628 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:10:57.628 "assigned_rate_limits": { 00:10:57.628 "rw_ios_per_sec": 0, 00:10:57.628 "rw_mbytes_per_sec": 0, 00:10:57.628 "r_mbytes_per_sec": 0, 00:10:57.628 "w_mbytes_per_sec": 0 00:10:57.628 }, 00:10:57.628 "claimed": false, 00:10:57.628 "zoned": false, 00:10:57.628 "supported_io_types": { 00:10:57.628 "read": true, 00:10:57.628 "write": true, 00:10:57.628 "unmap": true, 00:10:57.628 "flush": true, 00:10:57.628 "reset": true, 00:10:57.628 "nvme_admin": false, 00:10:57.628 "nvme_io": false, 00:10:57.628 "nvme_io_md": false, 00:10:57.628 "write_zeroes": true, 00:10:57.628 "zcopy": true, 00:10:57.628 "get_zone_info": false, 00:10:57.628 "zone_management": false, 00:10:57.628 "zone_append": false, 00:10:57.628 "compare": false, 00:10:57.628 "compare_and_write": false, 00:10:57.628 "abort": true, 00:10:57.628 "seek_hole": false, 00:10:57.628 
"seek_data": false, 00:10:57.628 "copy": true, 00:10:57.628 "nvme_iov_md": false 00:10:57.628 }, 00:10:57.628 "memory_domains": [ 00:10:57.628 { 00:10:57.628 "dma_device_id": "system", 00:10:57.628 "dma_device_type": 1 00:10:57.628 }, 00:10:57.628 { 00:10:57.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.628 "dma_device_type": 2 00:10:57.628 } 00:10:57.628 ], 00:10:57.628 "driver_specific": {} 00:10:57.628 } 00:10:57.628 ] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 BaseBdev3 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 [ 00:10:57.628 { 00:10:57.628 "name": "BaseBdev3", 00:10:57.628 "aliases": [ 00:10:57.628 "45f0450e-7453-4239-804a-92e9928c58b3" 00:10:57.628 ], 00:10:57.628 "product_name": "Malloc disk", 00:10:57.628 "block_size": 512, 00:10:57.628 "num_blocks": 65536, 00:10:57.628 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:10:57.628 "assigned_rate_limits": { 00:10:57.628 "rw_ios_per_sec": 0, 00:10:57.628 "rw_mbytes_per_sec": 0, 00:10:57.628 "r_mbytes_per_sec": 0, 00:10:57.628 "w_mbytes_per_sec": 0 00:10:57.628 }, 00:10:57.628 "claimed": false, 00:10:57.628 "zoned": false, 00:10:57.628 "supported_io_types": { 00:10:57.628 "read": true, 00:10:57.628 "write": true, 00:10:57.628 "unmap": true, 00:10:57.628 "flush": true, 00:10:57.628 "reset": true, 00:10:57.628 "nvme_admin": false, 00:10:57.628 "nvme_io": false, 00:10:57.628 "nvme_io_md": false, 00:10:57.628 "write_zeroes": true, 00:10:57.628 "zcopy": true, 00:10:57.628 "get_zone_info": false, 00:10:57.628 "zone_management": false, 00:10:57.628 "zone_append": false, 00:10:57.628 "compare": false, 00:10:57.628 "compare_and_write": false, 00:10:57.628 "abort": true, 00:10:57.628 "seek_hole": false, 00:10:57.628 "seek_data": false, 
00:10:57.628 "copy": true, 00:10:57.628 "nvme_iov_md": false 00:10:57.628 }, 00:10:57.628 "memory_domains": [ 00:10:57.628 { 00:10:57.628 "dma_device_id": "system", 00:10:57.628 "dma_device_type": 1 00:10:57.628 }, 00:10:57.628 { 00:10:57.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.628 "dma_device_type": 2 00:10:57.628 } 00:10:57.628 ], 00:10:57.628 "driver_specific": {} 00:10:57.628 } 00:10:57.628 ] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 BaseBdev4 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.628 
01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.628 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.628 [ 00:10:57.628 { 00:10:57.628 "name": "BaseBdev4", 00:10:57.628 "aliases": [ 00:10:57.628 "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee" 00:10:57.628 ], 00:10:57.628 "product_name": "Malloc disk", 00:10:57.628 "block_size": 512, 00:10:57.628 "num_blocks": 65536, 00:10:57.628 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:10:57.628 "assigned_rate_limits": { 00:10:57.628 "rw_ios_per_sec": 0, 00:10:57.628 "rw_mbytes_per_sec": 0, 00:10:57.628 "r_mbytes_per_sec": 0, 00:10:57.628 "w_mbytes_per_sec": 0 00:10:57.628 }, 00:10:57.628 "claimed": false, 00:10:57.628 "zoned": false, 00:10:57.628 "supported_io_types": { 00:10:57.628 "read": true, 00:10:57.628 "write": true, 00:10:57.628 "unmap": true, 00:10:57.628 "flush": true, 00:10:57.628 "reset": true, 00:10:57.628 "nvme_admin": false, 00:10:57.629 "nvme_io": false, 00:10:57.629 "nvme_io_md": false, 00:10:57.629 "write_zeroes": true, 00:10:57.629 "zcopy": true, 00:10:57.629 "get_zone_info": false, 00:10:57.629 "zone_management": false, 00:10:57.629 "zone_append": false, 00:10:57.629 "compare": false, 00:10:57.629 "compare_and_write": false, 00:10:57.629 "abort": true, 00:10:57.629 "seek_hole": false, 00:10:57.629 "seek_data": false, 00:10:57.629 
"copy": true, 00:10:57.629 "nvme_iov_md": false 00:10:57.629 }, 00:10:57.629 "memory_domains": [ 00:10:57.629 { 00:10:57.629 "dma_device_id": "system", 00:10:57.629 "dma_device_type": 1 00:10:57.629 }, 00:10:57.629 { 00:10:57.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.629 "dma_device_type": 2 00:10:57.629 } 00:10:57.629 ], 00:10:57.629 "driver_specific": {} 00:10:57.629 } 00:10:57.629 ] 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.629 [2024-11-17 01:31:05.993409] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.629 [2024-11-17 01:31:05.993543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.629 [2024-11-17 01:31:05.993587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.629 [2024-11-17 01:31:05.995464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.629 [2024-11-17 01:31:05.995565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.629 01:31:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.629 01:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.629 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.629 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.629 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.629 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.629 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.629 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.629 "name": "Existed_Raid", 00:10:57.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.629 "strip_size_kb": 64, 00:10:57.629 "state": "configuring", 00:10:57.629 
"raid_level": "raid0", 00:10:57.629 "superblock": false, 00:10:57.629 "num_base_bdevs": 4, 00:10:57.629 "num_base_bdevs_discovered": 3, 00:10:57.629 "num_base_bdevs_operational": 4, 00:10:57.629 "base_bdevs_list": [ 00:10:57.629 { 00:10:57.629 "name": "BaseBdev1", 00:10:57.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.629 "is_configured": false, 00:10:57.629 "data_offset": 0, 00:10:57.629 "data_size": 0 00:10:57.629 }, 00:10:57.629 { 00:10:57.629 "name": "BaseBdev2", 00:10:57.629 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:10:57.629 "is_configured": true, 00:10:57.629 "data_offset": 0, 00:10:57.629 "data_size": 65536 00:10:57.629 }, 00:10:57.629 { 00:10:57.629 "name": "BaseBdev3", 00:10:57.629 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:10:57.629 "is_configured": true, 00:10:57.629 "data_offset": 0, 00:10:57.629 "data_size": 65536 00:10:57.629 }, 00:10:57.629 { 00:10:57.629 "name": "BaseBdev4", 00:10:57.629 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:10:57.629 "is_configured": true, 00:10:57.629 "data_offset": 0, 00:10:57.629 "data_size": 65536 00:10:57.629 } 00:10:57.629 ] 00:10:57.629 }' 00:10:57.629 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.629 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.198 [2024-11-17 01:31:06.404729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.198 "name": "Existed_Raid", 00:10:58.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.198 "strip_size_kb": 64, 00:10:58.198 "state": "configuring", 00:10:58.198 "raid_level": "raid0", 00:10:58.198 "superblock": false, 00:10:58.198 
"num_base_bdevs": 4, 00:10:58.198 "num_base_bdevs_discovered": 2, 00:10:58.198 "num_base_bdevs_operational": 4, 00:10:58.198 "base_bdevs_list": [ 00:10:58.198 { 00:10:58.198 "name": "BaseBdev1", 00:10:58.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.198 "is_configured": false, 00:10:58.198 "data_offset": 0, 00:10:58.198 "data_size": 0 00:10:58.198 }, 00:10:58.198 { 00:10:58.198 "name": null, 00:10:58.198 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:10:58.198 "is_configured": false, 00:10:58.198 "data_offset": 0, 00:10:58.198 "data_size": 65536 00:10:58.198 }, 00:10:58.198 { 00:10:58.198 "name": "BaseBdev3", 00:10:58.198 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:10:58.198 "is_configured": true, 00:10:58.198 "data_offset": 0, 00:10:58.198 "data_size": 65536 00:10:58.198 }, 00:10:58.198 { 00:10:58.198 "name": "BaseBdev4", 00:10:58.198 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:10:58.198 "is_configured": true, 00:10:58.198 "data_offset": 0, 00:10:58.198 "data_size": 65536 00:10:58.198 } 00:10:58.198 ] 00:10:58.198 }' 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.198 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.457 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.457 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.458 01:31:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 [2024-11-17 01:31:06.892362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.458 BaseBdev1 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 01:31:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.716 [ 00:10:58.716 { 00:10:58.716 "name": "BaseBdev1", 00:10:58.716 "aliases": [ 00:10:58.716 "4d79a5a2-da61-4013-9ca7-02b51d61bd47" 00:10:58.716 ], 00:10:58.716 "product_name": "Malloc disk", 00:10:58.716 "block_size": 512, 00:10:58.716 "num_blocks": 65536, 00:10:58.716 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:10:58.716 "assigned_rate_limits": { 00:10:58.716 "rw_ios_per_sec": 0, 00:10:58.716 "rw_mbytes_per_sec": 0, 00:10:58.716 "r_mbytes_per_sec": 0, 00:10:58.716 "w_mbytes_per_sec": 0 00:10:58.716 }, 00:10:58.716 "claimed": true, 00:10:58.716 "claim_type": "exclusive_write", 00:10:58.716 "zoned": false, 00:10:58.716 "supported_io_types": { 00:10:58.716 "read": true, 00:10:58.716 "write": true, 00:10:58.716 "unmap": true, 00:10:58.716 "flush": true, 00:10:58.716 "reset": true, 00:10:58.716 "nvme_admin": false, 00:10:58.716 "nvme_io": false, 00:10:58.716 "nvme_io_md": false, 00:10:58.716 "write_zeroes": true, 00:10:58.716 "zcopy": true, 00:10:58.716 "get_zone_info": false, 00:10:58.716 "zone_management": false, 00:10:58.716 "zone_append": false, 00:10:58.716 "compare": false, 00:10:58.716 "compare_and_write": false, 00:10:58.716 "abort": true, 00:10:58.716 "seek_hole": false, 00:10:58.716 "seek_data": false, 00:10:58.716 "copy": true, 00:10:58.716 "nvme_iov_md": false 00:10:58.716 }, 00:10:58.716 "memory_domains": [ 00:10:58.716 { 00:10:58.716 "dma_device_id": "system", 00:10:58.716 "dma_device_type": 1 00:10:58.717 }, 00:10:58.717 { 00:10:58.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.717 "dma_device_type": 2 00:10:58.717 } 00:10:58.717 ], 00:10:58.717 "driver_specific": {} 00:10:58.717 } 00:10:58.717 ] 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.717 "name": "Existed_Raid", 00:10:58.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.717 "strip_size_kb": 64, 00:10:58.717 "state": "configuring", 00:10:58.717 "raid_level": "raid0", 00:10:58.717 "superblock": false, 
00:10:58.717 "num_base_bdevs": 4, 00:10:58.717 "num_base_bdevs_discovered": 3, 00:10:58.717 "num_base_bdevs_operational": 4, 00:10:58.717 "base_bdevs_list": [ 00:10:58.717 { 00:10:58.717 "name": "BaseBdev1", 00:10:58.717 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:10:58.717 "is_configured": true, 00:10:58.717 "data_offset": 0, 00:10:58.717 "data_size": 65536 00:10:58.717 }, 00:10:58.717 { 00:10:58.717 "name": null, 00:10:58.717 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:10:58.717 "is_configured": false, 00:10:58.717 "data_offset": 0, 00:10:58.717 "data_size": 65536 00:10:58.717 }, 00:10:58.717 { 00:10:58.717 "name": "BaseBdev3", 00:10:58.717 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:10:58.717 "is_configured": true, 00:10:58.717 "data_offset": 0, 00:10:58.717 "data_size": 65536 00:10:58.717 }, 00:10:58.717 { 00:10:58.717 "name": "BaseBdev4", 00:10:58.717 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:10:58.717 "is_configured": true, 00:10:58.717 "data_offset": 0, 00:10:58.717 "data_size": 65536 00:10:58.717 } 00:10:58.717 ] 00:10:58.717 }' 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.717 01:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.976 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.976 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.976 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.976 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.976 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:59.235 01:31:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.235 [2024-11-17 01:31:07.447524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.235 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.236 "name": "Existed_Raid", 00:10:59.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.236 "strip_size_kb": 64, 00:10:59.236 "state": "configuring", 00:10:59.236 "raid_level": "raid0", 00:10:59.236 "superblock": false, 00:10:59.236 "num_base_bdevs": 4, 00:10:59.236 "num_base_bdevs_discovered": 2, 00:10:59.236 "num_base_bdevs_operational": 4, 00:10:59.236 "base_bdevs_list": [ 00:10:59.236 { 00:10:59.236 "name": "BaseBdev1", 00:10:59.236 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:10:59.236 "is_configured": true, 00:10:59.236 "data_offset": 0, 00:10:59.236 "data_size": 65536 00:10:59.236 }, 00:10:59.236 { 00:10:59.236 "name": null, 00:10:59.236 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:10:59.236 "is_configured": false, 00:10:59.236 "data_offset": 0, 00:10:59.236 "data_size": 65536 00:10:59.236 }, 00:10:59.236 { 00:10:59.236 "name": null, 00:10:59.236 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:10:59.236 "is_configured": false, 00:10:59.236 "data_offset": 0, 00:10:59.236 "data_size": 65536 00:10:59.236 }, 00:10:59.236 { 00:10:59.236 "name": "BaseBdev4", 00:10:59.236 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:10:59.236 "is_configured": true, 00:10:59.236 "data_offset": 0, 00:10:59.236 "data_size": 65536 00:10:59.236 } 00:10:59.236 ] 00:10:59.236 }' 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.236 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.496 [2024-11-17 01:31:07.926718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:59.496 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.497 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.756 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.756 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.756 "name": "Existed_Raid", 00:10:59.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.756 "strip_size_kb": 64, 00:10:59.756 "state": "configuring", 00:10:59.756 "raid_level": "raid0", 00:10:59.756 "superblock": false, 00:10:59.756 "num_base_bdevs": 4, 00:10:59.756 "num_base_bdevs_discovered": 3, 00:10:59.756 "num_base_bdevs_operational": 4, 00:10:59.756 "base_bdevs_list": [ 00:10:59.756 { 00:10:59.756 "name": "BaseBdev1", 00:10:59.756 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:10:59.756 "is_configured": true, 00:10:59.756 "data_offset": 0, 00:10:59.756 "data_size": 65536 00:10:59.756 }, 00:10:59.756 { 00:10:59.756 "name": null, 00:10:59.756 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:10:59.756 "is_configured": false, 00:10:59.756 "data_offset": 0, 00:10:59.756 "data_size": 65536 00:10:59.756 }, 00:10:59.756 { 00:10:59.756 "name": "BaseBdev3", 00:10:59.756 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:10:59.756 "is_configured": 
true, 00:10:59.756 "data_offset": 0, 00:10:59.756 "data_size": 65536 00:10:59.756 }, 00:10:59.756 { 00:10:59.756 "name": "BaseBdev4", 00:10:59.756 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:10:59.756 "is_configured": true, 00:10:59.756 "data_offset": 0, 00:10:59.756 "data_size": 65536 00:10:59.756 } 00:10:59.756 ] 00:10:59.756 }' 00:10:59.756 01:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.756 01:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.016 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.016 [2024-11-17 01:31:08.417885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.276 "name": "Existed_Raid", 00:11:00.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.276 "strip_size_kb": 64, 00:11:00.276 "state": "configuring", 00:11:00.276 "raid_level": "raid0", 00:11:00.276 "superblock": false, 00:11:00.276 "num_base_bdevs": 4, 00:11:00.276 "num_base_bdevs_discovered": 2, 00:11:00.276 "num_base_bdevs_operational": 4, 00:11:00.276 
"base_bdevs_list": [ 00:11:00.276 { 00:11:00.276 "name": null, 00:11:00.276 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:11:00.276 "is_configured": false, 00:11:00.276 "data_offset": 0, 00:11:00.276 "data_size": 65536 00:11:00.276 }, 00:11:00.276 { 00:11:00.276 "name": null, 00:11:00.276 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:11:00.276 "is_configured": false, 00:11:00.276 "data_offset": 0, 00:11:00.276 "data_size": 65536 00:11:00.276 }, 00:11:00.276 { 00:11:00.276 "name": "BaseBdev3", 00:11:00.276 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:11:00.276 "is_configured": true, 00:11:00.276 "data_offset": 0, 00:11:00.276 "data_size": 65536 00:11:00.276 }, 00:11:00.276 { 00:11:00.276 "name": "BaseBdev4", 00:11:00.276 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:11:00.276 "is_configured": true, 00:11:00.276 "data_offset": 0, 00:11:00.276 "data_size": 65536 00:11:00.276 } 00:11:00.276 ] 00:11:00.276 }' 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.276 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.535 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.535 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.535 01:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.535 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.535 01:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:00.796 01:31:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.796 [2024-11-17 01:31:09.030565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:11:00.796 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.797 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.797 "name": "Existed_Raid", 00:11:00.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.797 "strip_size_kb": 64, 00:11:00.797 "state": "configuring", 00:11:00.797 "raid_level": "raid0", 00:11:00.797 "superblock": false, 00:11:00.797 "num_base_bdevs": 4, 00:11:00.797 "num_base_bdevs_discovered": 3, 00:11:00.797 "num_base_bdevs_operational": 4, 00:11:00.797 "base_bdevs_list": [ 00:11:00.797 { 00:11:00.797 "name": null, 00:11:00.797 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:11:00.797 "is_configured": false, 00:11:00.797 "data_offset": 0, 00:11:00.797 "data_size": 65536 00:11:00.797 }, 00:11:00.797 { 00:11:00.797 "name": "BaseBdev2", 00:11:00.797 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:11:00.797 "is_configured": true, 00:11:00.797 "data_offset": 0, 00:11:00.797 "data_size": 65536 00:11:00.797 }, 00:11:00.797 { 00:11:00.797 "name": "BaseBdev3", 00:11:00.797 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:11:00.797 "is_configured": true, 00:11:00.797 "data_offset": 0, 00:11:00.797 "data_size": 65536 00:11:00.797 }, 00:11:00.797 { 00:11:00.797 "name": "BaseBdev4", 00:11:00.797 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:11:00.797 "is_configured": true, 00:11:00.797 "data_offset": 0, 00:11:00.797 "data_size": 65536 00:11:00.797 } 00:11:00.797 ] 00:11:00.797 }' 00:11:00.797 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.797 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.058 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.058 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:01.058 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.058 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.058 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4d79a5a2-da61-4013-9ca7-02b51d61bd47 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.317 [2024-11-17 01:31:09.596178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:01.317 [2024-11-17 01:31:09.596224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:01.317 [2024-11-17 01:31:09.596231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:01.317 [2024-11-17 01:31:09.596489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:01.317 [2024-11-17 01:31:09.596627] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:01.317 [2024-11-17 01:31:09.596638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:01.317 [2024-11-17 01:31:09.596881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.317 NewBaseBdev 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.317 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.317 [ 00:11:01.317 { 
00:11:01.317 "name": "NewBaseBdev", 00:11:01.317 "aliases": [ 00:11:01.317 "4d79a5a2-da61-4013-9ca7-02b51d61bd47" 00:11:01.317 ], 00:11:01.318 "product_name": "Malloc disk", 00:11:01.318 "block_size": 512, 00:11:01.318 "num_blocks": 65536, 00:11:01.318 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:11:01.318 "assigned_rate_limits": { 00:11:01.318 "rw_ios_per_sec": 0, 00:11:01.318 "rw_mbytes_per_sec": 0, 00:11:01.318 "r_mbytes_per_sec": 0, 00:11:01.318 "w_mbytes_per_sec": 0 00:11:01.318 }, 00:11:01.318 "claimed": true, 00:11:01.318 "claim_type": "exclusive_write", 00:11:01.318 "zoned": false, 00:11:01.318 "supported_io_types": { 00:11:01.318 "read": true, 00:11:01.318 "write": true, 00:11:01.318 "unmap": true, 00:11:01.318 "flush": true, 00:11:01.318 "reset": true, 00:11:01.318 "nvme_admin": false, 00:11:01.318 "nvme_io": false, 00:11:01.318 "nvme_io_md": false, 00:11:01.318 "write_zeroes": true, 00:11:01.318 "zcopy": true, 00:11:01.318 "get_zone_info": false, 00:11:01.318 "zone_management": false, 00:11:01.318 "zone_append": false, 00:11:01.318 "compare": false, 00:11:01.318 "compare_and_write": false, 00:11:01.318 "abort": true, 00:11:01.318 "seek_hole": false, 00:11:01.318 "seek_data": false, 00:11:01.318 "copy": true, 00:11:01.318 "nvme_iov_md": false 00:11:01.318 }, 00:11:01.318 "memory_domains": [ 00:11:01.318 { 00:11:01.318 "dma_device_id": "system", 00:11:01.318 "dma_device_type": 1 00:11:01.318 }, 00:11:01.318 { 00:11:01.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.318 "dma_device_type": 2 00:11:01.318 } 00:11:01.318 ], 00:11:01.318 "driver_specific": {} 00:11:01.318 } 00:11:01.318 ] 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:01.318 
01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.318 "name": "Existed_Raid", 00:11:01.318 "uuid": "745b9eda-c63a-417b-9dd7-9da8e693c103", 00:11:01.318 "strip_size_kb": 64, 00:11:01.318 "state": "online", 00:11:01.318 "raid_level": "raid0", 00:11:01.318 "superblock": false, 00:11:01.318 "num_base_bdevs": 4, 00:11:01.318 "num_base_bdevs_discovered": 4, 00:11:01.318 
"num_base_bdevs_operational": 4, 00:11:01.318 "base_bdevs_list": [ 00:11:01.318 { 00:11:01.318 "name": "NewBaseBdev", 00:11:01.318 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:11:01.318 "is_configured": true, 00:11:01.318 "data_offset": 0, 00:11:01.318 "data_size": 65536 00:11:01.318 }, 00:11:01.318 { 00:11:01.318 "name": "BaseBdev2", 00:11:01.318 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:11:01.318 "is_configured": true, 00:11:01.318 "data_offset": 0, 00:11:01.318 "data_size": 65536 00:11:01.318 }, 00:11:01.318 { 00:11:01.318 "name": "BaseBdev3", 00:11:01.318 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:11:01.318 "is_configured": true, 00:11:01.318 "data_offset": 0, 00:11:01.318 "data_size": 65536 00:11:01.318 }, 00:11:01.318 { 00:11:01.318 "name": "BaseBdev4", 00:11:01.318 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:11:01.318 "is_configured": true, 00:11:01.318 "data_offset": 0, 00:11:01.318 "data_size": 65536 00:11:01.318 } 00:11:01.318 ] 00:11:01.318 }' 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.318 01:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.888 
01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.888 [2024-11-17 01:31:10.075697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.888 "name": "Existed_Raid", 00:11:01.888 "aliases": [ 00:11:01.888 "745b9eda-c63a-417b-9dd7-9da8e693c103" 00:11:01.888 ], 00:11:01.888 "product_name": "Raid Volume", 00:11:01.888 "block_size": 512, 00:11:01.888 "num_blocks": 262144, 00:11:01.888 "uuid": "745b9eda-c63a-417b-9dd7-9da8e693c103", 00:11:01.888 "assigned_rate_limits": { 00:11:01.888 "rw_ios_per_sec": 0, 00:11:01.888 "rw_mbytes_per_sec": 0, 00:11:01.888 "r_mbytes_per_sec": 0, 00:11:01.888 "w_mbytes_per_sec": 0 00:11:01.888 }, 00:11:01.888 "claimed": false, 00:11:01.888 "zoned": false, 00:11:01.888 "supported_io_types": { 00:11:01.888 "read": true, 00:11:01.888 "write": true, 00:11:01.888 "unmap": true, 00:11:01.888 "flush": true, 00:11:01.888 "reset": true, 00:11:01.888 "nvme_admin": false, 00:11:01.888 "nvme_io": false, 00:11:01.888 "nvme_io_md": false, 00:11:01.888 "write_zeroes": true, 00:11:01.888 "zcopy": false, 00:11:01.888 "get_zone_info": false, 00:11:01.888 "zone_management": false, 00:11:01.888 "zone_append": false, 00:11:01.888 "compare": false, 00:11:01.888 "compare_and_write": false, 00:11:01.888 "abort": false, 00:11:01.888 "seek_hole": false, 00:11:01.888 "seek_data": false, 00:11:01.888 "copy": false, 00:11:01.888 "nvme_iov_md": false 00:11:01.888 }, 00:11:01.888 "memory_domains": [ 00:11:01.888 { 00:11:01.888 "dma_device_id": 
"system", 00:11:01.888 "dma_device_type": 1 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.888 "dma_device_type": 2 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "dma_device_id": "system", 00:11:01.888 "dma_device_type": 1 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.888 "dma_device_type": 2 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "dma_device_id": "system", 00:11:01.888 "dma_device_type": 1 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.888 "dma_device_type": 2 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "dma_device_id": "system", 00:11:01.888 "dma_device_type": 1 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.888 "dma_device_type": 2 00:11:01.888 } 00:11:01.888 ], 00:11:01.888 "driver_specific": { 00:11:01.888 "raid": { 00:11:01.888 "uuid": "745b9eda-c63a-417b-9dd7-9da8e693c103", 00:11:01.888 "strip_size_kb": 64, 00:11:01.888 "state": "online", 00:11:01.888 "raid_level": "raid0", 00:11:01.888 "superblock": false, 00:11:01.888 "num_base_bdevs": 4, 00:11:01.888 "num_base_bdevs_discovered": 4, 00:11:01.888 "num_base_bdevs_operational": 4, 00:11:01.888 "base_bdevs_list": [ 00:11:01.888 { 00:11:01.888 "name": "NewBaseBdev", 00:11:01.888 "uuid": "4d79a5a2-da61-4013-9ca7-02b51d61bd47", 00:11:01.888 "is_configured": true, 00:11:01.888 "data_offset": 0, 00:11:01.888 "data_size": 65536 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "name": "BaseBdev2", 00:11:01.888 "uuid": "92d86a10-ea61-4c3c-bdcf-2ba31a54f4ea", 00:11:01.888 "is_configured": true, 00:11:01.888 "data_offset": 0, 00:11:01.888 "data_size": 65536 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "name": "BaseBdev3", 00:11:01.888 "uuid": "45f0450e-7453-4239-804a-92e9928c58b3", 00:11:01.888 "is_configured": true, 00:11:01.888 "data_offset": 0, 00:11:01.888 "data_size": 65536 00:11:01.888 }, 00:11:01.888 { 00:11:01.888 "name": 
"BaseBdev4", 00:11:01.888 "uuid": "089c752e-1a9a-4d8f-af16-7cb91ab3a5ee", 00:11:01.888 "is_configured": true, 00:11:01.888 "data_offset": 0, 00:11:01.888 "data_size": 65536 00:11:01.888 } 00:11:01.888 ] 00:11:01.888 } 00:11:01.888 } 00:11:01.888 }' 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:01.888 BaseBdev2 00:11:01.888 BaseBdev3 00:11:01.888 BaseBdev4' 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.888 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:01.889 01:31:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.889 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.148 [2024-11-17 01:31:10.406837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.148 [2024-11-17 01:31:10.406921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.148 [2024-11-17 01:31:10.406995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.148 [2024-11-17 01:31:10.407068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.148 [2024-11-17 01:31:10.407078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69180 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69180 ']' 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69180 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69180 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.148 killing process with pid 69180 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69180' 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69180 00:11:02.148 [2024-11-17 01:31:10.458040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.148 01:31:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69180 00:11:02.407 [2024-11-17 01:31:10.844268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:03.788 00:11:03.788 real 0m11.205s 00:11:03.788 user 0m17.854s 00:11:03.788 sys 0m1.984s 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.788 ************************************ 00:11:03.788 END TEST raid_state_function_test 00:11:03.788 ************************************ 00:11:03.788 01:31:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:11:03.788 01:31:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.788 01:31:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.788 01:31:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.788 ************************************ 00:11:03.788 START TEST raid_state_function_test_sb 00:11:03.788 ************************************ 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.788 01:31:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.788 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69846 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69846' 00:11:03.789 Process raid pid: 69846 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69846 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69846 ']' 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.789 01:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.789 [2024-11-17 01:31:12.067309] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:03.789 [2024-11-17 01:31:12.067648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.048 [2024-11-17 01:31:12.256262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.048 [2024-11-17 01:31:12.366727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.307 [2024-11-17 01:31:12.558093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.307 [2024-11-17 01:31:12.558186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.567 [2024-11-17 01:31:12.868699] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.567 [2024-11-17 01:31:12.868753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.567 [2024-11-17 01:31:12.868776] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.567 [2024-11-17 01:31:12.868786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.567 [2024-11-17 01:31:12.868792] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:04.567 [2024-11-17 01:31:12.868802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.567 [2024-11-17 01:31:12.868807] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:04.567 [2024-11-17 01:31:12.868815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.567 01:31:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.567 "name": "Existed_Raid", 00:11:04.567 "uuid": "dddf1d91-67ac-4998-8d56-c02ac25a11d3", 00:11:04.567 "strip_size_kb": 64, 00:11:04.567 "state": "configuring", 00:11:04.567 "raid_level": "raid0", 00:11:04.567 "superblock": true, 00:11:04.567 "num_base_bdevs": 4, 00:11:04.567 "num_base_bdevs_discovered": 0, 00:11:04.567 "num_base_bdevs_operational": 4, 00:11:04.567 "base_bdevs_list": [ 00:11:04.567 { 00:11:04.567 "name": "BaseBdev1", 00:11:04.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.567 "is_configured": false, 00:11:04.567 "data_offset": 0, 00:11:04.567 "data_size": 0 00:11:04.567 }, 00:11:04.567 { 00:11:04.567 "name": "BaseBdev2", 00:11:04.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.567 "is_configured": false, 00:11:04.567 "data_offset": 0, 00:11:04.567 "data_size": 0 00:11:04.567 }, 00:11:04.567 { 00:11:04.567 "name": "BaseBdev3", 00:11:04.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.567 "is_configured": false, 00:11:04.567 "data_offset": 0, 00:11:04.567 "data_size": 0 00:11:04.567 }, 00:11:04.567 { 00:11:04.567 "name": "BaseBdev4", 00:11:04.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.567 "is_configured": false, 00:11:04.567 "data_offset": 0, 00:11:04.567 "data_size": 0 00:11:04.567 } 00:11:04.567 ] 00:11:04.567 }' 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.567 01:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.135 [2024-11-17 01:31:13.323872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.135 [2024-11-17 01:31:13.324002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.135 [2024-11-17 01:31:13.335840] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.135 [2024-11-17 01:31:13.335923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.135 [2024-11-17 01:31:13.335956] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.135 [2024-11-17 01:31:13.335979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.135 [2024-11-17 01:31:13.336002] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.135 [2024-11-17 01:31:13.336024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.135 [2024-11-17 01:31:13.336050] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:05.135 [2024-11-17 01:31:13.336105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.135 [2024-11-17 01:31:13.380397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.135 BaseBdev1 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.135 [ 00:11:05.135 { 00:11:05.135 "name": "BaseBdev1", 00:11:05.135 "aliases": [ 00:11:05.135 "4199e29b-d65d-4513-abc6-8821a2a89f01" 00:11:05.135 ], 00:11:05.135 "product_name": "Malloc disk", 00:11:05.135 "block_size": 512, 00:11:05.135 "num_blocks": 65536, 00:11:05.135 "uuid": "4199e29b-d65d-4513-abc6-8821a2a89f01", 00:11:05.135 "assigned_rate_limits": { 00:11:05.135 "rw_ios_per_sec": 0, 00:11:05.135 "rw_mbytes_per_sec": 0, 00:11:05.135 "r_mbytes_per_sec": 0, 00:11:05.135 "w_mbytes_per_sec": 0 00:11:05.135 }, 00:11:05.135 "claimed": true, 00:11:05.135 "claim_type": "exclusive_write", 00:11:05.135 "zoned": false, 00:11:05.135 "supported_io_types": { 00:11:05.135 "read": true, 00:11:05.135 "write": true, 00:11:05.135 "unmap": true, 00:11:05.135 "flush": true, 00:11:05.135 "reset": true, 00:11:05.135 "nvme_admin": false, 00:11:05.135 "nvme_io": false, 00:11:05.135 "nvme_io_md": false, 00:11:05.135 "write_zeroes": true, 00:11:05.135 "zcopy": true, 00:11:05.135 "get_zone_info": false, 00:11:05.135 "zone_management": false, 00:11:05.135 "zone_append": false, 00:11:05.135 "compare": false, 00:11:05.135 "compare_and_write": false, 00:11:05.135 "abort": true, 00:11:05.135 "seek_hole": false, 00:11:05.135 "seek_data": false, 00:11:05.135 "copy": true, 00:11:05.135 "nvme_iov_md": false 00:11:05.135 }, 00:11:05.135 "memory_domains": [ 00:11:05.135 { 00:11:05.135 "dma_device_id": "system", 00:11:05.135 "dma_device_type": 1 00:11:05.135 }, 00:11:05.135 { 00:11:05.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.135 "dma_device_type": 2 00:11:05.135 } 00:11:05.135 ], 00:11:05.135 "driver_specific": {} 
00:11:05.135 } 00:11:05.135 ] 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.135 "name": "Existed_Raid", 00:11:05.135 "uuid": "783adbcf-3c79-44ed-81bf-8a57be89ea65", 00:11:05.135 "strip_size_kb": 64, 00:11:05.135 "state": "configuring", 00:11:05.135 "raid_level": "raid0", 00:11:05.135 "superblock": true, 00:11:05.135 "num_base_bdevs": 4, 00:11:05.135 "num_base_bdevs_discovered": 1, 00:11:05.135 "num_base_bdevs_operational": 4, 00:11:05.135 "base_bdevs_list": [ 00:11:05.135 { 00:11:05.135 "name": "BaseBdev1", 00:11:05.135 "uuid": "4199e29b-d65d-4513-abc6-8821a2a89f01", 00:11:05.135 "is_configured": true, 00:11:05.135 "data_offset": 2048, 00:11:05.135 "data_size": 63488 00:11:05.135 }, 00:11:05.135 { 00:11:05.135 "name": "BaseBdev2", 00:11:05.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.135 "is_configured": false, 00:11:05.135 "data_offset": 0, 00:11:05.135 "data_size": 0 00:11:05.135 }, 00:11:05.135 { 00:11:05.135 "name": "BaseBdev3", 00:11:05.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.135 "is_configured": false, 00:11:05.135 "data_offset": 0, 00:11:05.135 "data_size": 0 00:11:05.135 }, 00:11:05.135 { 00:11:05.135 "name": "BaseBdev4", 00:11:05.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.135 "is_configured": false, 00:11:05.135 "data_offset": 0, 00:11:05.135 "data_size": 0 00:11:05.135 } 00:11:05.135 ] 00:11:05.135 }' 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.135 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.704 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.704 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.704 01:31:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.704 [2024-11-17 01:31:13.859616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.705 [2024-11-17 01:31:13.859673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.705 [2024-11-17 01:31:13.871651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.705 [2024-11-17 01:31:13.873498] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.705 [2024-11-17 01:31:13.873574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.705 [2024-11-17 01:31:13.873601] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.705 [2024-11-17 01:31:13.873625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.705 [2024-11-17 01:31:13.873643] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.705 [2024-11-17 01:31:13.873663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:05.705 01:31:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.705 "name": 
"Existed_Raid", 00:11:05.705 "uuid": "924d9b6d-7b8f-4d95-9a1f-87d8ecc3ef2d", 00:11:05.705 "strip_size_kb": 64, 00:11:05.705 "state": "configuring", 00:11:05.705 "raid_level": "raid0", 00:11:05.705 "superblock": true, 00:11:05.705 "num_base_bdevs": 4, 00:11:05.705 "num_base_bdevs_discovered": 1, 00:11:05.705 "num_base_bdevs_operational": 4, 00:11:05.705 "base_bdevs_list": [ 00:11:05.705 { 00:11:05.705 "name": "BaseBdev1", 00:11:05.705 "uuid": "4199e29b-d65d-4513-abc6-8821a2a89f01", 00:11:05.705 "is_configured": true, 00:11:05.705 "data_offset": 2048, 00:11:05.705 "data_size": 63488 00:11:05.705 }, 00:11:05.705 { 00:11:05.705 "name": "BaseBdev2", 00:11:05.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.705 "is_configured": false, 00:11:05.705 "data_offset": 0, 00:11:05.705 "data_size": 0 00:11:05.705 }, 00:11:05.705 { 00:11:05.705 "name": "BaseBdev3", 00:11:05.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.705 "is_configured": false, 00:11:05.705 "data_offset": 0, 00:11:05.705 "data_size": 0 00:11:05.705 }, 00:11:05.705 { 00:11:05.705 "name": "BaseBdev4", 00:11:05.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.705 "is_configured": false, 00:11:05.705 "data_offset": 0, 00:11:05.705 "data_size": 0 00:11:05.705 } 00:11:05.705 ] 00:11:05.705 }' 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.705 01:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.965 [2024-11-17 01:31:14.334550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:05.965 BaseBdev2 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.965 [ 00:11:05.965 { 00:11:05.965 "name": "BaseBdev2", 00:11:05.965 "aliases": [ 00:11:05.965 "c53f0f88-b528-4781-b1e8-d50107e42116" 00:11:05.965 ], 00:11:05.965 "product_name": "Malloc disk", 00:11:05.965 "block_size": 512, 00:11:05.965 "num_blocks": 65536, 00:11:05.965 "uuid": "c53f0f88-b528-4781-b1e8-d50107e42116", 00:11:05.965 
"assigned_rate_limits": { 00:11:05.965 "rw_ios_per_sec": 0, 00:11:05.965 "rw_mbytes_per_sec": 0, 00:11:05.965 "r_mbytes_per_sec": 0, 00:11:05.965 "w_mbytes_per_sec": 0 00:11:05.965 }, 00:11:05.965 "claimed": true, 00:11:05.965 "claim_type": "exclusive_write", 00:11:05.965 "zoned": false, 00:11:05.965 "supported_io_types": { 00:11:05.965 "read": true, 00:11:05.965 "write": true, 00:11:05.965 "unmap": true, 00:11:05.965 "flush": true, 00:11:05.965 "reset": true, 00:11:05.965 "nvme_admin": false, 00:11:05.965 "nvme_io": false, 00:11:05.965 "nvme_io_md": false, 00:11:05.965 "write_zeroes": true, 00:11:05.965 "zcopy": true, 00:11:05.965 "get_zone_info": false, 00:11:05.965 "zone_management": false, 00:11:05.965 "zone_append": false, 00:11:05.965 "compare": false, 00:11:05.965 "compare_and_write": false, 00:11:05.965 "abort": true, 00:11:05.965 "seek_hole": false, 00:11:05.965 "seek_data": false, 00:11:05.965 "copy": true, 00:11:05.965 "nvme_iov_md": false 00:11:05.965 }, 00:11:05.965 "memory_domains": [ 00:11:05.965 { 00:11:05.965 "dma_device_id": "system", 00:11:05.965 "dma_device_type": 1 00:11:05.965 }, 00:11:05.965 { 00:11:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.965 "dma_device_type": 2 00:11:05.965 } 00:11:05.965 ], 00:11:05.965 "driver_specific": {} 00:11:05.965 } 00:11:05.965 ] 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.965 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.225 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.225 "name": "Existed_Raid", 00:11:06.225 "uuid": "924d9b6d-7b8f-4d95-9a1f-87d8ecc3ef2d", 00:11:06.225 "strip_size_kb": 64, 00:11:06.225 "state": "configuring", 00:11:06.225 "raid_level": "raid0", 00:11:06.225 "superblock": true, 00:11:06.225 "num_base_bdevs": 4, 00:11:06.225 "num_base_bdevs_discovered": 2, 00:11:06.225 "num_base_bdevs_operational": 4, 
00:11:06.225 "base_bdevs_list": [ 00:11:06.225 { 00:11:06.225 "name": "BaseBdev1", 00:11:06.225 "uuid": "4199e29b-d65d-4513-abc6-8821a2a89f01", 00:11:06.225 "is_configured": true, 00:11:06.225 "data_offset": 2048, 00:11:06.225 "data_size": 63488 00:11:06.225 }, 00:11:06.225 { 00:11:06.225 "name": "BaseBdev2", 00:11:06.225 "uuid": "c53f0f88-b528-4781-b1e8-d50107e42116", 00:11:06.225 "is_configured": true, 00:11:06.225 "data_offset": 2048, 00:11:06.225 "data_size": 63488 00:11:06.225 }, 00:11:06.225 { 00:11:06.225 "name": "BaseBdev3", 00:11:06.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.225 "is_configured": false, 00:11:06.225 "data_offset": 0, 00:11:06.225 "data_size": 0 00:11:06.225 }, 00:11:06.225 { 00:11:06.225 "name": "BaseBdev4", 00:11:06.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.225 "is_configured": false, 00:11:06.225 "data_offset": 0, 00:11:06.225 "data_size": 0 00:11:06.225 } 00:11:06.225 ] 00:11:06.225 }' 00:11:06.225 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.225 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.485 [2024-11-17 01:31:14.821073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.485 BaseBdev3 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.485 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.485 [ 00:11:06.485 { 00:11:06.485 "name": "BaseBdev3", 00:11:06.485 "aliases": [ 00:11:06.486 "fdd21421-fe46-4676-a901-d829713607c7" 00:11:06.486 ], 00:11:06.486 "product_name": "Malloc disk", 00:11:06.486 "block_size": 512, 00:11:06.486 "num_blocks": 65536, 00:11:06.486 "uuid": "fdd21421-fe46-4676-a901-d829713607c7", 00:11:06.486 "assigned_rate_limits": { 00:11:06.486 "rw_ios_per_sec": 0, 00:11:06.486 "rw_mbytes_per_sec": 0, 00:11:06.486 "r_mbytes_per_sec": 0, 00:11:06.486 "w_mbytes_per_sec": 0 00:11:06.486 }, 00:11:06.486 "claimed": true, 00:11:06.486 "claim_type": "exclusive_write", 00:11:06.486 "zoned": false, 00:11:06.486 "supported_io_types": { 00:11:06.486 "read": true, 00:11:06.486 
"write": true, 00:11:06.486 "unmap": true, 00:11:06.486 "flush": true, 00:11:06.486 "reset": true, 00:11:06.486 "nvme_admin": false, 00:11:06.486 "nvme_io": false, 00:11:06.486 "nvme_io_md": false, 00:11:06.486 "write_zeroes": true, 00:11:06.486 "zcopy": true, 00:11:06.486 "get_zone_info": false, 00:11:06.486 "zone_management": false, 00:11:06.486 "zone_append": false, 00:11:06.486 "compare": false, 00:11:06.486 "compare_and_write": false, 00:11:06.486 "abort": true, 00:11:06.486 "seek_hole": false, 00:11:06.486 "seek_data": false, 00:11:06.486 "copy": true, 00:11:06.486 "nvme_iov_md": false 00:11:06.486 }, 00:11:06.486 "memory_domains": [ 00:11:06.486 { 00:11:06.486 "dma_device_id": "system", 00:11:06.486 "dma_device_type": 1 00:11:06.486 }, 00:11:06.486 { 00:11:06.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.486 "dma_device_type": 2 00:11:06.486 } 00:11:06.486 ], 00:11:06.486 "driver_specific": {} 00:11:06.486 } 00:11:06.486 ] 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.486 "name": "Existed_Raid", 00:11:06.486 "uuid": "924d9b6d-7b8f-4d95-9a1f-87d8ecc3ef2d", 00:11:06.486 "strip_size_kb": 64, 00:11:06.486 "state": "configuring", 00:11:06.486 "raid_level": "raid0", 00:11:06.486 "superblock": true, 00:11:06.486 "num_base_bdevs": 4, 00:11:06.486 "num_base_bdevs_discovered": 3, 00:11:06.486 "num_base_bdevs_operational": 4, 00:11:06.486 "base_bdevs_list": [ 00:11:06.486 { 00:11:06.486 "name": "BaseBdev1", 00:11:06.486 "uuid": "4199e29b-d65d-4513-abc6-8821a2a89f01", 00:11:06.486 "is_configured": true, 00:11:06.486 "data_offset": 2048, 00:11:06.486 "data_size": 63488 00:11:06.486 }, 00:11:06.486 { 00:11:06.486 "name": "BaseBdev2", 00:11:06.486 "uuid": 
"c53f0f88-b528-4781-b1e8-d50107e42116", 00:11:06.486 "is_configured": true, 00:11:06.486 "data_offset": 2048, 00:11:06.486 "data_size": 63488 00:11:06.486 }, 00:11:06.486 { 00:11:06.486 "name": "BaseBdev3", 00:11:06.486 "uuid": "fdd21421-fe46-4676-a901-d829713607c7", 00:11:06.486 "is_configured": true, 00:11:06.486 "data_offset": 2048, 00:11:06.486 "data_size": 63488 00:11:06.486 }, 00:11:06.486 { 00:11:06.486 "name": "BaseBdev4", 00:11:06.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.486 "is_configured": false, 00:11:06.486 "data_offset": 0, 00:11:06.486 "data_size": 0 00:11:06.486 } 00:11:06.486 ] 00:11:06.486 }' 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.486 01:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.055 [2024-11-17 01:31:15.326420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.055 [2024-11-17 01:31:15.326684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:07.055 [2024-11-17 01:31:15.326700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:07.055 [2024-11-17 01:31:15.326991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:07.055 [2024-11-17 01:31:15.327166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:07.055 [2024-11-17 01:31:15.327181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:07.055 BaseBdev4 00:11:07.055 [2024-11-17 01:31:15.327318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.055 [ 00:11:07.055 { 00:11:07.055 "name": "BaseBdev4", 00:11:07.055 "aliases": [ 00:11:07.055 "4f132c4e-530d-47a2-8d1d-bf85a8bbc397" 00:11:07.055 ], 00:11:07.055 "product_name": "Malloc disk", 00:11:07.055 "block_size": 512, 00:11:07.055 
"num_blocks": 65536, 00:11:07.055 "uuid": "4f132c4e-530d-47a2-8d1d-bf85a8bbc397", 00:11:07.055 "assigned_rate_limits": { 00:11:07.055 "rw_ios_per_sec": 0, 00:11:07.055 "rw_mbytes_per_sec": 0, 00:11:07.055 "r_mbytes_per_sec": 0, 00:11:07.055 "w_mbytes_per_sec": 0 00:11:07.055 }, 00:11:07.055 "claimed": true, 00:11:07.055 "claim_type": "exclusive_write", 00:11:07.055 "zoned": false, 00:11:07.055 "supported_io_types": { 00:11:07.055 "read": true, 00:11:07.055 "write": true, 00:11:07.055 "unmap": true, 00:11:07.055 "flush": true, 00:11:07.055 "reset": true, 00:11:07.055 "nvme_admin": false, 00:11:07.055 "nvme_io": false, 00:11:07.055 "nvme_io_md": false, 00:11:07.055 "write_zeroes": true, 00:11:07.055 "zcopy": true, 00:11:07.055 "get_zone_info": false, 00:11:07.055 "zone_management": false, 00:11:07.055 "zone_append": false, 00:11:07.055 "compare": false, 00:11:07.055 "compare_and_write": false, 00:11:07.055 "abort": true, 00:11:07.055 "seek_hole": false, 00:11:07.055 "seek_data": false, 00:11:07.055 "copy": true, 00:11:07.055 "nvme_iov_md": false 00:11:07.055 }, 00:11:07.055 "memory_domains": [ 00:11:07.055 { 00:11:07.055 "dma_device_id": "system", 00:11:07.055 "dma_device_type": 1 00:11:07.055 }, 00:11:07.055 { 00:11:07.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.055 "dma_device_type": 2 00:11:07.055 } 00:11:07.055 ], 00:11:07.055 "driver_specific": {} 00:11:07.055 } 00:11:07.055 ] 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.055 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.055 "name": "Existed_Raid", 00:11:07.055 "uuid": "924d9b6d-7b8f-4d95-9a1f-87d8ecc3ef2d", 00:11:07.055 "strip_size_kb": 64, 00:11:07.055 "state": "online", 00:11:07.055 "raid_level": "raid0", 00:11:07.055 "superblock": true, 00:11:07.055 "num_base_bdevs": 4, 
00:11:07.055 "num_base_bdevs_discovered": 4, 00:11:07.055 "num_base_bdevs_operational": 4, 00:11:07.055 "base_bdevs_list": [ 00:11:07.055 { 00:11:07.055 "name": "BaseBdev1", 00:11:07.055 "uuid": "4199e29b-d65d-4513-abc6-8821a2a89f01", 00:11:07.055 "is_configured": true, 00:11:07.055 "data_offset": 2048, 00:11:07.055 "data_size": 63488 00:11:07.055 }, 00:11:07.055 { 00:11:07.055 "name": "BaseBdev2", 00:11:07.055 "uuid": "c53f0f88-b528-4781-b1e8-d50107e42116", 00:11:07.055 "is_configured": true, 00:11:07.055 "data_offset": 2048, 00:11:07.055 "data_size": 63488 00:11:07.055 }, 00:11:07.055 { 00:11:07.055 "name": "BaseBdev3", 00:11:07.055 "uuid": "fdd21421-fe46-4676-a901-d829713607c7", 00:11:07.055 "is_configured": true, 00:11:07.055 "data_offset": 2048, 00:11:07.055 "data_size": 63488 00:11:07.055 }, 00:11:07.055 { 00:11:07.055 "name": "BaseBdev4", 00:11:07.055 "uuid": "4f132c4e-530d-47a2-8d1d-bf85a8bbc397", 00:11:07.055 "is_configured": true, 00:11:07.055 "data_offset": 2048, 00:11:07.055 "data_size": 63488 00:11:07.055 } 00:11:07.056 ] 00:11:07.056 }' 00:11:07.056 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.056 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.625 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.625 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.625 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.625 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.625 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.625 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.625 
01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.626 [2024-11-17 01:31:15.802013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.626 "name": "Existed_Raid", 00:11:07.626 "aliases": [ 00:11:07.626 "924d9b6d-7b8f-4d95-9a1f-87d8ecc3ef2d" 00:11:07.626 ], 00:11:07.626 "product_name": "Raid Volume", 00:11:07.626 "block_size": 512, 00:11:07.626 "num_blocks": 253952, 00:11:07.626 "uuid": "924d9b6d-7b8f-4d95-9a1f-87d8ecc3ef2d", 00:11:07.626 "assigned_rate_limits": { 00:11:07.626 "rw_ios_per_sec": 0, 00:11:07.626 "rw_mbytes_per_sec": 0, 00:11:07.626 "r_mbytes_per_sec": 0, 00:11:07.626 "w_mbytes_per_sec": 0 00:11:07.626 }, 00:11:07.626 "claimed": false, 00:11:07.626 "zoned": false, 00:11:07.626 "supported_io_types": { 00:11:07.626 "read": true, 00:11:07.626 "write": true, 00:11:07.626 "unmap": true, 00:11:07.626 "flush": true, 00:11:07.626 "reset": true, 00:11:07.626 "nvme_admin": false, 00:11:07.626 "nvme_io": false, 00:11:07.626 "nvme_io_md": false, 00:11:07.626 "write_zeroes": true, 00:11:07.626 "zcopy": false, 00:11:07.626 "get_zone_info": false, 00:11:07.626 "zone_management": false, 00:11:07.626 "zone_append": false, 00:11:07.626 "compare": false, 00:11:07.626 "compare_and_write": false, 00:11:07.626 "abort": false, 00:11:07.626 "seek_hole": false, 00:11:07.626 "seek_data": false, 00:11:07.626 "copy": false, 00:11:07.626 
"nvme_iov_md": false 00:11:07.626 }, 00:11:07.626 "memory_domains": [ 00:11:07.626 { 00:11:07.626 "dma_device_id": "system", 00:11:07.626 "dma_device_type": 1 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.626 "dma_device_type": 2 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "dma_device_id": "system", 00:11:07.626 "dma_device_type": 1 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.626 "dma_device_type": 2 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "dma_device_id": "system", 00:11:07.626 "dma_device_type": 1 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.626 "dma_device_type": 2 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "dma_device_id": "system", 00:11:07.626 "dma_device_type": 1 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.626 "dma_device_type": 2 00:11:07.626 } 00:11:07.626 ], 00:11:07.626 "driver_specific": { 00:11:07.626 "raid": { 00:11:07.626 "uuid": "924d9b6d-7b8f-4d95-9a1f-87d8ecc3ef2d", 00:11:07.626 "strip_size_kb": 64, 00:11:07.626 "state": "online", 00:11:07.626 "raid_level": "raid0", 00:11:07.626 "superblock": true, 00:11:07.626 "num_base_bdevs": 4, 00:11:07.626 "num_base_bdevs_discovered": 4, 00:11:07.626 "num_base_bdevs_operational": 4, 00:11:07.626 "base_bdevs_list": [ 00:11:07.626 { 00:11:07.626 "name": "BaseBdev1", 00:11:07.626 "uuid": "4199e29b-d65d-4513-abc6-8821a2a89f01", 00:11:07.626 "is_configured": true, 00:11:07.626 "data_offset": 2048, 00:11:07.626 "data_size": 63488 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "name": "BaseBdev2", 00:11:07.626 "uuid": "c53f0f88-b528-4781-b1e8-d50107e42116", 00:11:07.626 "is_configured": true, 00:11:07.626 "data_offset": 2048, 00:11:07.626 "data_size": 63488 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "name": "BaseBdev3", 00:11:07.626 "uuid": "fdd21421-fe46-4676-a901-d829713607c7", 00:11:07.626 "is_configured": true, 
00:11:07.626 "data_offset": 2048, 00:11:07.626 "data_size": 63488 00:11:07.626 }, 00:11:07.626 { 00:11:07.626 "name": "BaseBdev4", 00:11:07.626 "uuid": "4f132c4e-530d-47a2-8d1d-bf85a8bbc397", 00:11:07.626 "is_configured": true, 00:11:07.626 "data_offset": 2048, 00:11:07.626 "data_size": 63488 00:11:07.626 } 00:11:07.626 ] 00:11:07.626 } 00:11:07.626 } 00:11:07.626 }' 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.626 BaseBdev2 00:11:07.626 BaseBdev3 00:11:07.626 BaseBdev4' 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.626 01:31:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.626 01:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.626 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.886 [2024-11-17 01:31:16.085221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.887 [2024-11-17 01:31:16.085252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.887 [2024-11-17 01:31:16.085300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.887 "name": "Existed_Raid", 00:11:07.887 "uuid": "924d9b6d-7b8f-4d95-9a1f-87d8ecc3ef2d", 00:11:07.887 "strip_size_kb": 64, 00:11:07.887 "state": "offline", 00:11:07.887 "raid_level": "raid0", 00:11:07.887 "superblock": true, 00:11:07.887 "num_base_bdevs": 4, 00:11:07.887 "num_base_bdevs_discovered": 3, 00:11:07.887 "num_base_bdevs_operational": 3, 00:11:07.887 "base_bdevs_list": [ 00:11:07.887 { 00:11:07.887 "name": null, 00:11:07.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.887 "is_configured": false, 00:11:07.887 "data_offset": 0, 00:11:07.887 "data_size": 63488 00:11:07.887 }, 00:11:07.887 { 00:11:07.887 "name": "BaseBdev2", 00:11:07.887 "uuid": "c53f0f88-b528-4781-b1e8-d50107e42116", 00:11:07.887 "is_configured": true, 00:11:07.887 "data_offset": 2048, 00:11:07.887 "data_size": 63488 00:11:07.887 }, 00:11:07.887 { 00:11:07.887 "name": "BaseBdev3", 00:11:07.887 "uuid": "fdd21421-fe46-4676-a901-d829713607c7", 00:11:07.887 "is_configured": true, 00:11:07.887 "data_offset": 2048, 00:11:07.887 "data_size": 63488 00:11:07.887 }, 00:11:07.887 { 00:11:07.887 "name": "BaseBdev4", 00:11:07.887 "uuid": "4f132c4e-530d-47a2-8d1d-bf85a8bbc397", 00:11:07.887 "is_configured": true, 00:11:07.887 "data_offset": 2048, 00:11:07.887 "data_size": 63488 00:11:07.887 } 00:11:07.887 ] 00:11:07.887 }' 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.887 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.147 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.147 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.147 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.147 01:31:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.147 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.147 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.407 [2024-11-17 01:31:16.631987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.407 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.407 [2024-11-17 01:31:16.780801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:08.667 01:31:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.667 01:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.667 [2024-11-17 01:31:16.930828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:08.667 [2024-11-17 01:31:16.930941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:08.667 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.667 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.667 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.667 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.667 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.668 BaseBdev2 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.668 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.949 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.949 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:08.949 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.949 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.949 [ 00:11:08.949 { 00:11:08.949 "name": "BaseBdev2", 00:11:08.949 "aliases": [ 00:11:08.949 
"b2f6af38-ffd1-434a-8c7d-dda2c6c522ad" 00:11:08.949 ], 00:11:08.949 "product_name": "Malloc disk", 00:11:08.949 "block_size": 512, 00:11:08.949 "num_blocks": 65536, 00:11:08.949 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:08.949 "assigned_rate_limits": { 00:11:08.949 "rw_ios_per_sec": 0, 00:11:08.949 "rw_mbytes_per_sec": 0, 00:11:08.949 "r_mbytes_per_sec": 0, 00:11:08.949 "w_mbytes_per_sec": 0 00:11:08.949 }, 00:11:08.949 "claimed": false, 00:11:08.949 "zoned": false, 00:11:08.949 "supported_io_types": { 00:11:08.949 "read": true, 00:11:08.949 "write": true, 00:11:08.949 "unmap": true, 00:11:08.949 "flush": true, 00:11:08.949 "reset": true, 00:11:08.949 "nvme_admin": false, 00:11:08.949 "nvme_io": false, 00:11:08.949 "nvme_io_md": false, 00:11:08.949 "write_zeroes": true, 00:11:08.949 "zcopy": true, 00:11:08.949 "get_zone_info": false, 00:11:08.949 "zone_management": false, 00:11:08.949 "zone_append": false, 00:11:08.949 "compare": false, 00:11:08.949 "compare_and_write": false, 00:11:08.949 "abort": true, 00:11:08.949 "seek_hole": false, 00:11:08.950 "seek_data": false, 00:11:08.950 "copy": true, 00:11:08.950 "nvme_iov_md": false 00:11:08.950 }, 00:11:08.950 "memory_domains": [ 00:11:08.950 { 00:11:08.950 "dma_device_id": "system", 00:11:08.950 "dma_device_type": 1 00:11:08.950 }, 00:11:08.950 { 00:11:08.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.950 "dma_device_type": 2 00:11:08.950 } 00:11:08.950 ], 00:11:08.950 "driver_specific": {} 00:11:08.950 } 00:11:08.950 ] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.950 01:31:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 BaseBdev3 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 [ 00:11:08.950 { 
00:11:08.950 "name": "BaseBdev3", 00:11:08.950 "aliases": [ 00:11:08.950 "3dec22cb-1365-4ec4-ae64-fe81f331e90b" 00:11:08.950 ], 00:11:08.950 "product_name": "Malloc disk", 00:11:08.950 "block_size": 512, 00:11:08.950 "num_blocks": 65536, 00:11:08.950 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:08.950 "assigned_rate_limits": { 00:11:08.950 "rw_ios_per_sec": 0, 00:11:08.950 "rw_mbytes_per_sec": 0, 00:11:08.950 "r_mbytes_per_sec": 0, 00:11:08.950 "w_mbytes_per_sec": 0 00:11:08.950 }, 00:11:08.950 "claimed": false, 00:11:08.950 "zoned": false, 00:11:08.950 "supported_io_types": { 00:11:08.950 "read": true, 00:11:08.950 "write": true, 00:11:08.950 "unmap": true, 00:11:08.950 "flush": true, 00:11:08.950 "reset": true, 00:11:08.950 "nvme_admin": false, 00:11:08.950 "nvme_io": false, 00:11:08.950 "nvme_io_md": false, 00:11:08.950 "write_zeroes": true, 00:11:08.950 "zcopy": true, 00:11:08.950 "get_zone_info": false, 00:11:08.950 "zone_management": false, 00:11:08.950 "zone_append": false, 00:11:08.950 "compare": false, 00:11:08.950 "compare_and_write": false, 00:11:08.950 "abort": true, 00:11:08.950 "seek_hole": false, 00:11:08.950 "seek_data": false, 00:11:08.950 "copy": true, 00:11:08.950 "nvme_iov_md": false 00:11:08.950 }, 00:11:08.950 "memory_domains": [ 00:11:08.950 { 00:11:08.950 "dma_device_id": "system", 00:11:08.950 "dma_device_type": 1 00:11:08.950 }, 00:11:08.950 { 00:11:08.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.950 "dma_device_type": 2 00:11:08.950 } 00:11:08.950 ], 00:11:08.950 "driver_specific": {} 00:11:08.950 } 00:11:08.950 ] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 BaseBdev4 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.950 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:08.950 [ 00:11:08.950 { 00:11:08.950 "name": "BaseBdev4", 00:11:08.950 "aliases": [ 00:11:08.950 "8f0a887a-90c5-4681-9a0a-7ead7b66b67e" 00:11:08.950 ], 00:11:08.950 "product_name": "Malloc disk", 00:11:08.950 "block_size": 512, 00:11:08.950 "num_blocks": 65536, 00:11:08.950 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:08.950 "assigned_rate_limits": { 00:11:08.950 "rw_ios_per_sec": 0, 00:11:08.950 "rw_mbytes_per_sec": 0, 00:11:08.950 "r_mbytes_per_sec": 0, 00:11:08.950 "w_mbytes_per_sec": 0 00:11:08.950 }, 00:11:08.950 "claimed": false, 00:11:08.950 "zoned": false, 00:11:08.950 "supported_io_types": { 00:11:08.950 "read": true, 00:11:08.950 "write": true, 00:11:08.950 "unmap": true, 00:11:08.950 "flush": true, 00:11:08.950 "reset": true, 00:11:08.950 "nvme_admin": false, 00:11:08.950 "nvme_io": false, 00:11:08.950 "nvme_io_md": false, 00:11:08.950 "write_zeroes": true, 00:11:08.950 "zcopy": true, 00:11:08.950 "get_zone_info": false, 00:11:08.950 "zone_management": false, 00:11:08.950 "zone_append": false, 00:11:08.950 "compare": false, 00:11:08.950 "compare_and_write": false, 00:11:08.950 "abort": true, 00:11:08.950 "seek_hole": false, 00:11:08.951 "seek_data": false, 00:11:08.951 "copy": true, 00:11:08.951 "nvme_iov_md": false 00:11:08.951 }, 00:11:08.951 "memory_domains": [ 00:11:08.951 { 00:11:08.951 "dma_device_id": "system", 00:11:08.951 "dma_device_type": 1 00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.951 "dma_device_type": 2 00:11:08.951 } 00:11:08.951 ], 00:11:08.951 "driver_specific": {} 00:11:08.951 } 00:11:08.951 ] 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:08.951 01:31:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.951 [2024-11-17 01:31:17.316506] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.951 [2024-11-17 01:31:17.316645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.951 [2024-11-17 01:31:17.316689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.951 [2024-11-17 01:31:17.318602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.951 [2024-11-17 01:31:17.318697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.951 "name": "Existed_Raid", 00:11:08.951 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:08.951 "strip_size_kb": 64, 00:11:08.951 "state": "configuring", 00:11:08.951 "raid_level": "raid0", 00:11:08.951 "superblock": true, 00:11:08.951 "num_base_bdevs": 4, 00:11:08.951 "num_base_bdevs_discovered": 3, 00:11:08.951 "num_base_bdevs_operational": 4, 00:11:08.951 "base_bdevs_list": [ 00:11:08.951 { 00:11:08.951 "name": "BaseBdev1", 00:11:08.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.951 "is_configured": false, 00:11:08.951 "data_offset": 0, 00:11:08.951 "data_size": 0 00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "name": "BaseBdev2", 00:11:08.951 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 
00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "name": "BaseBdev3", 00:11:08.951 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 }, 00:11:08.951 { 00:11:08.951 "name": "BaseBdev4", 00:11:08.951 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:08.951 "is_configured": true, 00:11:08.951 "data_offset": 2048, 00:11:08.951 "data_size": 63488 00:11:08.951 } 00:11:08.951 ] 00:11:08.951 }' 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.951 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.532 [2024-11-17 01:31:17.751747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.532 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.533 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.533 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.533 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.533 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.533 "name": "Existed_Raid", 00:11:09.533 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:09.533 "strip_size_kb": 64, 00:11:09.533 "state": "configuring", 00:11:09.533 "raid_level": "raid0", 00:11:09.533 "superblock": true, 00:11:09.533 "num_base_bdevs": 4, 00:11:09.533 "num_base_bdevs_discovered": 2, 00:11:09.533 "num_base_bdevs_operational": 4, 00:11:09.533 "base_bdevs_list": [ 00:11:09.533 { 00:11:09.533 "name": "BaseBdev1", 00:11:09.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.533 "is_configured": false, 00:11:09.533 "data_offset": 0, 00:11:09.533 "data_size": 0 00:11:09.533 }, 00:11:09.533 { 00:11:09.533 "name": null, 00:11:09.533 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:09.533 "is_configured": false, 00:11:09.533 "data_offset": 0, 00:11:09.533 "data_size": 63488 
00:11:09.533 }, 00:11:09.533 { 00:11:09.533 "name": "BaseBdev3", 00:11:09.533 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:09.533 "is_configured": true, 00:11:09.533 "data_offset": 2048, 00:11:09.533 "data_size": 63488 00:11:09.533 }, 00:11:09.533 { 00:11:09.533 "name": "BaseBdev4", 00:11:09.533 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:09.533 "is_configured": true, 00:11:09.533 "data_offset": 2048, 00:11:09.533 "data_size": 63488 00:11:09.533 } 00:11:09.533 ] 00:11:09.533 }' 00:11:09.533 01:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.533 01:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.793 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.053 [2024-11-17 01:31:18.258345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.053 BaseBdev1 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.053 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.053 [ 00:11:10.053 { 00:11:10.053 "name": "BaseBdev1", 00:11:10.053 "aliases": [ 00:11:10.053 "98161ea2-a2a7-4a4d-bd87-2ead793a52f0" 00:11:10.053 ], 00:11:10.053 "product_name": "Malloc disk", 00:11:10.054 "block_size": 512, 00:11:10.054 "num_blocks": 65536, 00:11:10.054 "uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:10.054 "assigned_rate_limits": { 00:11:10.054 "rw_ios_per_sec": 0, 00:11:10.054 "rw_mbytes_per_sec": 0, 
00:11:10.054 "r_mbytes_per_sec": 0, 00:11:10.054 "w_mbytes_per_sec": 0 00:11:10.054 }, 00:11:10.054 "claimed": true, 00:11:10.054 "claim_type": "exclusive_write", 00:11:10.054 "zoned": false, 00:11:10.054 "supported_io_types": { 00:11:10.054 "read": true, 00:11:10.054 "write": true, 00:11:10.054 "unmap": true, 00:11:10.054 "flush": true, 00:11:10.054 "reset": true, 00:11:10.054 "nvme_admin": false, 00:11:10.054 "nvme_io": false, 00:11:10.054 "nvme_io_md": false, 00:11:10.054 "write_zeroes": true, 00:11:10.054 "zcopy": true, 00:11:10.054 "get_zone_info": false, 00:11:10.054 "zone_management": false, 00:11:10.054 "zone_append": false, 00:11:10.054 "compare": false, 00:11:10.054 "compare_and_write": false, 00:11:10.054 "abort": true, 00:11:10.054 "seek_hole": false, 00:11:10.054 "seek_data": false, 00:11:10.054 "copy": true, 00:11:10.054 "nvme_iov_md": false 00:11:10.054 }, 00:11:10.054 "memory_domains": [ 00:11:10.054 { 00:11:10.054 "dma_device_id": "system", 00:11:10.054 "dma_device_type": 1 00:11:10.054 }, 00:11:10.054 { 00:11:10.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.054 "dma_device_type": 2 00:11:10.054 } 00:11:10.054 ], 00:11:10.054 "driver_specific": {} 00:11:10.054 } 00:11:10.054 ] 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.054 01:31:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.054 "name": "Existed_Raid", 00:11:10.054 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:10.054 "strip_size_kb": 64, 00:11:10.054 "state": "configuring", 00:11:10.054 "raid_level": "raid0", 00:11:10.054 "superblock": true, 00:11:10.054 "num_base_bdevs": 4, 00:11:10.054 "num_base_bdevs_discovered": 3, 00:11:10.054 "num_base_bdevs_operational": 4, 00:11:10.054 "base_bdevs_list": [ 00:11:10.054 { 00:11:10.054 "name": "BaseBdev1", 00:11:10.054 "uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:10.054 "is_configured": true, 00:11:10.054 "data_offset": 2048, 00:11:10.054 "data_size": 63488 00:11:10.054 }, 00:11:10.054 { 
00:11:10.054 "name": null, 00:11:10.054 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:10.054 "is_configured": false, 00:11:10.054 "data_offset": 0, 00:11:10.054 "data_size": 63488 00:11:10.054 }, 00:11:10.054 { 00:11:10.054 "name": "BaseBdev3", 00:11:10.054 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:10.054 "is_configured": true, 00:11:10.054 "data_offset": 2048, 00:11:10.054 "data_size": 63488 00:11:10.054 }, 00:11:10.054 { 00:11:10.054 "name": "BaseBdev4", 00:11:10.054 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:10.054 "is_configured": true, 00:11:10.054 "data_offset": 2048, 00:11:10.054 "data_size": 63488 00:11:10.054 } 00:11:10.054 ] 00:11:10.054 }' 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.054 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.314 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.574 [2024-11-17 01:31:18.773506] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.574 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.575 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.575 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.575 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.575 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.575 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.575 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.575 01:31:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.575 "name": "Existed_Raid", 00:11:10.575 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:10.575 "strip_size_kb": 64, 00:11:10.575 "state": "configuring", 00:11:10.575 "raid_level": "raid0", 00:11:10.575 "superblock": true, 00:11:10.575 "num_base_bdevs": 4, 00:11:10.575 "num_base_bdevs_discovered": 2, 00:11:10.575 "num_base_bdevs_operational": 4, 00:11:10.575 "base_bdevs_list": [ 00:11:10.575 { 00:11:10.575 "name": "BaseBdev1", 00:11:10.575 "uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:10.575 "is_configured": true, 00:11:10.575 "data_offset": 2048, 00:11:10.575 "data_size": 63488 00:11:10.575 }, 00:11:10.575 { 00:11:10.575 "name": null, 00:11:10.575 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:10.575 "is_configured": false, 00:11:10.575 "data_offset": 0, 00:11:10.575 "data_size": 63488 00:11:10.575 }, 00:11:10.575 { 00:11:10.575 "name": null, 00:11:10.575 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:10.575 "is_configured": false, 00:11:10.575 "data_offset": 0, 00:11:10.575 "data_size": 63488 00:11:10.575 }, 00:11:10.575 { 00:11:10.575 "name": "BaseBdev4", 00:11:10.575 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:10.575 "is_configured": true, 00:11:10.575 "data_offset": 2048, 00:11:10.575 "data_size": 63488 00:11:10.575 } 00:11:10.575 ] 00:11:10.575 }' 00:11:10.575 01:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.575 01:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.835 
01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.835 [2024-11-17 01:31:19.284639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:10.835 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.094 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.094 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.094 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.094 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.094 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.094 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.094 "name": "Existed_Raid", 00:11:11.094 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:11.094 "strip_size_kb": 64, 00:11:11.095 "state": "configuring", 00:11:11.095 "raid_level": "raid0", 00:11:11.095 "superblock": true, 00:11:11.095 "num_base_bdevs": 4, 00:11:11.095 "num_base_bdevs_discovered": 3, 00:11:11.095 "num_base_bdevs_operational": 4, 00:11:11.095 "base_bdevs_list": [ 00:11:11.095 { 00:11:11.095 "name": "BaseBdev1", 00:11:11.095 "uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:11.095 "is_configured": true, 00:11:11.095 "data_offset": 2048, 00:11:11.095 "data_size": 63488 00:11:11.095 }, 00:11:11.095 { 00:11:11.095 "name": null, 00:11:11.095 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:11.095 "is_configured": false, 00:11:11.095 "data_offset": 0, 00:11:11.095 "data_size": 63488 00:11:11.095 }, 00:11:11.095 { 00:11:11.095 "name": "BaseBdev3", 00:11:11.095 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:11.095 "is_configured": true, 00:11:11.095 "data_offset": 2048, 00:11:11.095 "data_size": 63488 00:11:11.095 }, 00:11:11.095 { 00:11:11.095 "name": "BaseBdev4", 00:11:11.095 "uuid": 
"8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:11.095 "is_configured": true, 00:11:11.095 "data_offset": 2048, 00:11:11.095 "data_size": 63488 00:11:11.095 } 00:11:11.095 ] 00:11:11.095 }' 00:11:11.095 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.095 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.354 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.354 [2024-11-17 01:31:19.751902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.614 "name": "Existed_Raid", 00:11:11.614 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:11.614 "strip_size_kb": 64, 00:11:11.614 "state": "configuring", 00:11:11.614 "raid_level": "raid0", 00:11:11.614 "superblock": true, 00:11:11.614 "num_base_bdevs": 4, 00:11:11.614 "num_base_bdevs_discovered": 2, 00:11:11.614 "num_base_bdevs_operational": 4, 00:11:11.614 "base_bdevs_list": [ 00:11:11.614 { 00:11:11.614 "name": null, 00:11:11.614 
"uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:11.614 "is_configured": false, 00:11:11.614 "data_offset": 0, 00:11:11.614 "data_size": 63488 00:11:11.614 }, 00:11:11.614 { 00:11:11.614 "name": null, 00:11:11.614 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:11.614 "is_configured": false, 00:11:11.614 "data_offset": 0, 00:11:11.614 "data_size": 63488 00:11:11.614 }, 00:11:11.614 { 00:11:11.614 "name": "BaseBdev3", 00:11:11.614 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:11.614 "is_configured": true, 00:11:11.614 "data_offset": 2048, 00:11:11.614 "data_size": 63488 00:11:11.614 }, 00:11:11.614 { 00:11:11.614 "name": "BaseBdev4", 00:11:11.614 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:11.614 "is_configured": true, 00:11:11.614 "data_offset": 2048, 00:11:11.614 "data_size": 63488 00:11:11.614 } 00:11:11.614 ] 00:11:11.614 }' 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.614 01:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.873 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.874 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.874 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.874 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.874 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.134 [2024-11-17 01:31:20.348938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.134 "name": "Existed_Raid", 00:11:12.134 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:12.134 "strip_size_kb": 64, 00:11:12.134 "state": "configuring", 00:11:12.134 "raid_level": "raid0", 00:11:12.134 "superblock": true, 00:11:12.134 "num_base_bdevs": 4, 00:11:12.134 "num_base_bdevs_discovered": 3, 00:11:12.134 "num_base_bdevs_operational": 4, 00:11:12.134 "base_bdevs_list": [ 00:11:12.134 { 00:11:12.134 "name": null, 00:11:12.134 "uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:12.134 "is_configured": false, 00:11:12.134 "data_offset": 0, 00:11:12.134 "data_size": 63488 00:11:12.134 }, 00:11:12.134 { 00:11:12.134 "name": "BaseBdev2", 00:11:12.134 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:12.134 "is_configured": true, 00:11:12.134 "data_offset": 2048, 00:11:12.134 "data_size": 63488 00:11:12.134 }, 00:11:12.134 { 00:11:12.134 "name": "BaseBdev3", 00:11:12.134 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:12.134 "is_configured": true, 00:11:12.134 "data_offset": 2048, 00:11:12.134 "data_size": 63488 00:11:12.134 }, 00:11:12.134 { 00:11:12.134 "name": "BaseBdev4", 00:11:12.134 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:12.134 "is_configured": true, 00:11:12.134 "data_offset": 2048, 00:11:12.134 "data_size": 63488 00:11:12.134 } 00:11:12.134 ] 00:11:12.134 }' 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.134 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.394 01:31:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:12.394 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.654 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.654 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 98161ea2-a2a7-4a4d-bd87-2ead793a52f0 00:11:12.654 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.654 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.654 [2024-11-17 01:31:20.927395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:12.654 [2024-11-17 01:31:20.927617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.654 [2024-11-17 01:31:20.927630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:12.654 [2024-11-17 01:31:20.927935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:12.654 NewBaseBdev 00:11:12.654 [2024-11-17 01:31:20.928080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.655 [2024-11-17 01:31:20.928093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:12.655 [2024-11-17 01:31:20.928218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.655 01:31:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.655 [ 00:11:12.655 { 00:11:12.655 "name": "NewBaseBdev", 00:11:12.655 "aliases": [ 00:11:12.655 "98161ea2-a2a7-4a4d-bd87-2ead793a52f0" 00:11:12.655 ], 00:11:12.655 "product_name": "Malloc disk", 00:11:12.655 "block_size": 512, 00:11:12.655 "num_blocks": 65536, 00:11:12.655 "uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:12.655 "assigned_rate_limits": { 00:11:12.655 "rw_ios_per_sec": 0, 00:11:12.655 "rw_mbytes_per_sec": 0, 00:11:12.655 "r_mbytes_per_sec": 0, 00:11:12.655 "w_mbytes_per_sec": 0 00:11:12.655 }, 00:11:12.655 "claimed": true, 00:11:12.655 "claim_type": "exclusive_write", 00:11:12.655 "zoned": false, 00:11:12.655 "supported_io_types": { 00:11:12.655 "read": true, 00:11:12.655 "write": true, 00:11:12.655 "unmap": true, 00:11:12.655 "flush": true, 00:11:12.655 "reset": true, 00:11:12.655 "nvme_admin": false, 00:11:12.655 "nvme_io": false, 00:11:12.655 "nvme_io_md": false, 00:11:12.655 "write_zeroes": true, 00:11:12.655 "zcopy": true, 00:11:12.655 "get_zone_info": false, 00:11:12.655 "zone_management": false, 00:11:12.655 "zone_append": false, 00:11:12.655 "compare": false, 00:11:12.655 "compare_and_write": false, 00:11:12.655 "abort": true, 00:11:12.655 "seek_hole": false, 00:11:12.655 "seek_data": false, 00:11:12.655 "copy": true, 00:11:12.655 "nvme_iov_md": false 00:11:12.655 }, 00:11:12.655 "memory_domains": [ 00:11:12.655 { 00:11:12.655 "dma_device_id": "system", 00:11:12.655 "dma_device_type": 1 00:11:12.655 }, 00:11:12.655 { 00:11:12.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.655 "dma_device_type": 2 00:11:12.655 } 00:11:12.655 ], 00:11:12.655 "driver_specific": {} 00:11:12.655 } 00:11:12.655 ] 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:12.655 01:31:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.655 01:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.655 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.655 "name": "Existed_Raid", 00:11:12.655 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:12.655 "strip_size_kb": 64, 00:11:12.655 
"state": "online", 00:11:12.655 "raid_level": "raid0", 00:11:12.655 "superblock": true, 00:11:12.655 "num_base_bdevs": 4, 00:11:12.655 "num_base_bdevs_discovered": 4, 00:11:12.655 "num_base_bdevs_operational": 4, 00:11:12.655 "base_bdevs_list": [ 00:11:12.655 { 00:11:12.655 "name": "NewBaseBdev", 00:11:12.655 "uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:12.655 "is_configured": true, 00:11:12.655 "data_offset": 2048, 00:11:12.655 "data_size": 63488 00:11:12.655 }, 00:11:12.655 { 00:11:12.655 "name": "BaseBdev2", 00:11:12.655 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:12.655 "is_configured": true, 00:11:12.655 "data_offset": 2048, 00:11:12.655 "data_size": 63488 00:11:12.655 }, 00:11:12.655 { 00:11:12.655 "name": "BaseBdev3", 00:11:12.655 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:12.655 "is_configured": true, 00:11:12.655 "data_offset": 2048, 00:11:12.655 "data_size": 63488 00:11:12.655 }, 00:11:12.655 { 00:11:12.655 "name": "BaseBdev4", 00:11:12.655 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:12.655 "is_configured": true, 00:11:12.655 "data_offset": 2048, 00:11:12.655 "data_size": 63488 00:11:12.655 } 00:11:12.655 ] 00:11:12.655 }' 00:11:12.655 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.655 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.225 
01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.225 [2024-11-17 01:31:21.446919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.225 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.225 "name": "Existed_Raid", 00:11:13.225 "aliases": [ 00:11:13.225 "33032a44-eda4-4640-b4e5-a4942e2b074d" 00:11:13.225 ], 00:11:13.225 "product_name": "Raid Volume", 00:11:13.225 "block_size": 512, 00:11:13.225 "num_blocks": 253952, 00:11:13.225 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:13.225 "assigned_rate_limits": { 00:11:13.225 "rw_ios_per_sec": 0, 00:11:13.225 "rw_mbytes_per_sec": 0, 00:11:13.225 "r_mbytes_per_sec": 0, 00:11:13.225 "w_mbytes_per_sec": 0 00:11:13.225 }, 00:11:13.225 "claimed": false, 00:11:13.225 "zoned": false, 00:11:13.225 "supported_io_types": { 00:11:13.225 "read": true, 00:11:13.226 "write": true, 00:11:13.226 "unmap": true, 00:11:13.226 "flush": true, 00:11:13.226 "reset": true, 00:11:13.226 "nvme_admin": false, 00:11:13.226 "nvme_io": false, 00:11:13.226 "nvme_io_md": false, 00:11:13.226 "write_zeroes": true, 00:11:13.226 "zcopy": false, 00:11:13.226 "get_zone_info": false, 00:11:13.226 "zone_management": false, 00:11:13.226 "zone_append": false, 00:11:13.226 "compare": false, 00:11:13.226 "compare_and_write": false, 00:11:13.226 "abort": 
false, 00:11:13.226 "seek_hole": false, 00:11:13.226 "seek_data": false, 00:11:13.226 "copy": false, 00:11:13.226 "nvme_iov_md": false 00:11:13.226 }, 00:11:13.226 "memory_domains": [ 00:11:13.226 { 00:11:13.226 "dma_device_id": "system", 00:11:13.226 "dma_device_type": 1 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.226 "dma_device_type": 2 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "dma_device_id": "system", 00:11:13.226 "dma_device_type": 1 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.226 "dma_device_type": 2 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "dma_device_id": "system", 00:11:13.226 "dma_device_type": 1 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.226 "dma_device_type": 2 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "dma_device_id": "system", 00:11:13.226 "dma_device_type": 1 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.226 "dma_device_type": 2 00:11:13.226 } 00:11:13.226 ], 00:11:13.226 "driver_specific": { 00:11:13.226 "raid": { 00:11:13.226 "uuid": "33032a44-eda4-4640-b4e5-a4942e2b074d", 00:11:13.226 "strip_size_kb": 64, 00:11:13.226 "state": "online", 00:11:13.226 "raid_level": "raid0", 00:11:13.226 "superblock": true, 00:11:13.226 "num_base_bdevs": 4, 00:11:13.226 "num_base_bdevs_discovered": 4, 00:11:13.226 "num_base_bdevs_operational": 4, 00:11:13.226 "base_bdevs_list": [ 00:11:13.226 { 00:11:13.226 "name": "NewBaseBdev", 00:11:13.226 "uuid": "98161ea2-a2a7-4a4d-bd87-2ead793a52f0", 00:11:13.226 "is_configured": true, 00:11:13.226 "data_offset": 2048, 00:11:13.226 "data_size": 63488 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "name": "BaseBdev2", 00:11:13.226 "uuid": "b2f6af38-ffd1-434a-8c7d-dda2c6c522ad", 00:11:13.226 "is_configured": true, 00:11:13.226 "data_offset": 2048, 00:11:13.226 "data_size": 63488 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 
"name": "BaseBdev3", 00:11:13.226 "uuid": "3dec22cb-1365-4ec4-ae64-fe81f331e90b", 00:11:13.226 "is_configured": true, 00:11:13.226 "data_offset": 2048, 00:11:13.226 "data_size": 63488 00:11:13.226 }, 00:11:13.226 { 00:11:13.226 "name": "BaseBdev4", 00:11:13.226 "uuid": "8f0a887a-90c5-4681-9a0a-7ead7b66b67e", 00:11:13.226 "is_configured": true, 00:11:13.226 "data_offset": 2048, 00:11:13.226 "data_size": 63488 00:11:13.226 } 00:11:13.226 ] 00:11:13.226 } 00:11:13.226 } 00:11:13.226 }' 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:13.226 BaseBdev2 00:11:13.226 BaseBdev3 00:11:13.226 BaseBdev4' 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.226 01:31:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.226 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.486 [2024-11-17 01:31:21.762016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.486 [2024-11-17 01:31:21.762047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.486 [2024-11-17 01:31:21.762115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.486 [2024-11-17 01:31:21.762178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.486 [2024-11-17 01:31:21.762188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69846 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69846 ']' 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69846 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69846 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69846' 00:11:13.486 killing process with pid 69846 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69846 00:11:13.486 [2024-11-17 01:31:21.808968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.486 01:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69846 00:11:13.746 [2024-11-17 01:31:22.181604] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.127 01:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:15.127 00:11:15.127 real 0m11.288s 00:11:15.127 user 0m17.852s 00:11:15.127 sys 0m2.051s 00:11:15.127 01:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.127 01:31:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.127 ************************************ 00:11:15.127 END TEST raid_state_function_test_sb 00:11:15.127 ************************************ 00:11:15.127 01:31:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:15.127 01:31:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:15.127 01:31:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.127 01:31:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.127 ************************************ 00:11:15.127 START TEST raid_superblock_test 00:11:15.127 ************************************ 00:11:15.127 01:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:15.127 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:15.127 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:15.127 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:15.127 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:15.127 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:15.127 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70511 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70511 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70511 ']' 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.128 01:31:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.128 [2024-11-17 01:31:23.403140] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:15.128 [2024-11-17 01:31:23.403240] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70511 ] 00:11:15.128 [2024-11-17 01:31:23.573264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.388 [2024-11-17 01:31:23.680272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.648 [2024-11-17 01:31:23.867954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.648 [2024-11-17 01:31:23.868015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:15.908 
01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.908 malloc1 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.908 [2024-11-17 01:31:24.293150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:15.908 [2024-11-17 01:31:24.293314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.908 [2024-11-17 01:31:24.293356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:15.908 [2024-11-17 01:31:24.293386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.908 [2024-11-17 01:31:24.295438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.908 [2024-11-17 01:31:24.295528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:15.908 pt1 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.908 malloc2 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.908 [2024-11-17 01:31:24.347582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.908 [2024-11-17 01:31:24.347638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.908 [2024-11-17 01:31:24.347659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:15.908 [2024-11-17 01:31:24.347668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.908 [2024-11-17 01:31:24.349632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.908 [2024-11-17 01:31:24.349670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.908 
pt2 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.908 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.168 malloc3 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.168 [2024-11-17 01:31:24.409103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:16.168 [2024-11-17 01:31:24.409225] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.168 [2024-11-17 01:31:24.409278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:16.168 [2024-11-17 01:31:24.409306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.168 [2024-11-17 01:31:24.411270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.168 [2024-11-17 01:31:24.411345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:16.168 pt3 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:16.168 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.169 malloc4 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.169 [2024-11-17 01:31:24.464247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:16.169 [2024-11-17 01:31:24.464358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.169 [2024-11-17 01:31:24.464408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:16.169 [2024-11-17 01:31:24.464436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.169 [2024-11-17 01:31:24.466427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.169 [2024-11-17 01:31:24.466497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:16.169 pt4 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.169 [2024-11-17 01:31:24.476259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:16.169 [2024-11-17 
01:31:24.478054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.169 [2024-11-17 01:31:24.478156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:16.169 [2024-11-17 01:31:24.478251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:16.169 [2024-11-17 01:31:24.478449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:16.169 [2024-11-17 01:31:24.478495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.169 [2024-11-17 01:31:24.478749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.169 [2024-11-17 01:31:24.478957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:16.169 [2024-11-17 01:31:24.479002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:16.169 [2024-11-17 01:31:24.479180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.169 "name": "raid_bdev1", 00:11:16.169 "uuid": "3c944c71-afa9-4477-9c5a-d3bb5020799c", 00:11:16.169 "strip_size_kb": 64, 00:11:16.169 "state": "online", 00:11:16.169 "raid_level": "raid0", 00:11:16.169 "superblock": true, 00:11:16.169 "num_base_bdevs": 4, 00:11:16.169 "num_base_bdevs_discovered": 4, 00:11:16.169 "num_base_bdevs_operational": 4, 00:11:16.169 "base_bdevs_list": [ 00:11:16.169 { 00:11:16.169 "name": "pt1", 00:11:16.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.169 "is_configured": true, 00:11:16.169 "data_offset": 2048, 00:11:16.169 "data_size": 63488 00:11:16.169 }, 00:11:16.169 { 00:11:16.169 "name": "pt2", 00:11:16.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.169 "is_configured": true, 00:11:16.169 "data_offset": 2048, 00:11:16.169 "data_size": 63488 00:11:16.169 }, 00:11:16.169 { 00:11:16.169 "name": "pt3", 00:11:16.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.169 "is_configured": true, 00:11:16.169 "data_offset": 2048, 00:11:16.169 
"data_size": 63488 00:11:16.169 }, 00:11:16.169 { 00:11:16.169 "name": "pt4", 00:11:16.169 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.169 "is_configured": true, 00:11:16.169 "data_offset": 2048, 00:11:16.169 "data_size": 63488 00:11:16.169 } 00:11:16.169 ] 00:11:16.169 }' 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.169 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.739 [2024-11-17 01:31:24.903823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.739 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.739 "name": "raid_bdev1", 00:11:16.739 "aliases": [ 00:11:16.739 "3c944c71-afa9-4477-9c5a-d3bb5020799c" 
00:11:16.739 ], 00:11:16.739 "product_name": "Raid Volume", 00:11:16.739 "block_size": 512, 00:11:16.739 "num_blocks": 253952, 00:11:16.739 "uuid": "3c944c71-afa9-4477-9c5a-d3bb5020799c", 00:11:16.739 "assigned_rate_limits": { 00:11:16.739 "rw_ios_per_sec": 0, 00:11:16.739 "rw_mbytes_per_sec": 0, 00:11:16.739 "r_mbytes_per_sec": 0, 00:11:16.739 "w_mbytes_per_sec": 0 00:11:16.739 }, 00:11:16.739 "claimed": false, 00:11:16.739 "zoned": false, 00:11:16.739 "supported_io_types": { 00:11:16.739 "read": true, 00:11:16.739 "write": true, 00:11:16.739 "unmap": true, 00:11:16.739 "flush": true, 00:11:16.739 "reset": true, 00:11:16.739 "nvme_admin": false, 00:11:16.739 "nvme_io": false, 00:11:16.739 "nvme_io_md": false, 00:11:16.739 "write_zeroes": true, 00:11:16.739 "zcopy": false, 00:11:16.739 "get_zone_info": false, 00:11:16.739 "zone_management": false, 00:11:16.739 "zone_append": false, 00:11:16.739 "compare": false, 00:11:16.739 "compare_and_write": false, 00:11:16.739 "abort": false, 00:11:16.739 "seek_hole": false, 00:11:16.739 "seek_data": false, 00:11:16.739 "copy": false, 00:11:16.739 "nvme_iov_md": false 00:11:16.739 }, 00:11:16.739 "memory_domains": [ 00:11:16.739 { 00:11:16.739 "dma_device_id": "system", 00:11:16.739 "dma_device_type": 1 00:11:16.739 }, 00:11:16.739 { 00:11:16.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.739 "dma_device_type": 2 00:11:16.739 }, 00:11:16.739 { 00:11:16.739 "dma_device_id": "system", 00:11:16.739 "dma_device_type": 1 00:11:16.739 }, 00:11:16.739 { 00:11:16.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.739 "dma_device_type": 2 00:11:16.739 }, 00:11:16.739 { 00:11:16.739 "dma_device_id": "system", 00:11:16.739 "dma_device_type": 1 00:11:16.740 }, 00:11:16.740 { 00:11:16.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.740 "dma_device_type": 2 00:11:16.740 }, 00:11:16.740 { 00:11:16.740 "dma_device_id": "system", 00:11:16.740 "dma_device_type": 1 00:11:16.740 }, 00:11:16.740 { 00:11:16.740 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:16.740 "dma_device_type": 2 00:11:16.740 } 00:11:16.740 ], 00:11:16.740 "driver_specific": { 00:11:16.740 "raid": { 00:11:16.740 "uuid": "3c944c71-afa9-4477-9c5a-d3bb5020799c", 00:11:16.740 "strip_size_kb": 64, 00:11:16.740 "state": "online", 00:11:16.740 "raid_level": "raid0", 00:11:16.740 "superblock": true, 00:11:16.740 "num_base_bdevs": 4, 00:11:16.740 "num_base_bdevs_discovered": 4, 00:11:16.740 "num_base_bdevs_operational": 4, 00:11:16.740 "base_bdevs_list": [ 00:11:16.740 { 00:11:16.740 "name": "pt1", 00:11:16.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.740 "is_configured": true, 00:11:16.740 "data_offset": 2048, 00:11:16.740 "data_size": 63488 00:11:16.740 }, 00:11:16.740 { 00:11:16.740 "name": "pt2", 00:11:16.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.740 "is_configured": true, 00:11:16.740 "data_offset": 2048, 00:11:16.740 "data_size": 63488 00:11:16.740 }, 00:11:16.740 { 00:11:16.740 "name": "pt3", 00:11:16.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.740 "is_configured": true, 00:11:16.740 "data_offset": 2048, 00:11:16.740 "data_size": 63488 00:11:16.740 }, 00:11:16.740 { 00:11:16.740 "name": "pt4", 00:11:16.740 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.740 "is_configured": true, 00:11:16.740 "data_offset": 2048, 00:11:16.740 "data_size": 63488 00:11:16.740 } 00:11:16.740 ] 00:11:16.740 } 00:11:16.740 } 00:11:16.740 }' 00:11:16.740 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.740 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:16.740 pt2 00:11:16.740 pt3 00:11:16.740 pt4' 00:11:16.740 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.740 01:31:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.740 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.740 01:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.740 01:31:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.740 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.001 [2024-11-17 01:31:25.203235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3c944c71-afa9-4477-9c5a-d3bb5020799c 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3c944c71-afa9-4477-9c5a-d3bb5020799c ']' 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.001 [2024-11-17 01:31:25.238903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.001 [2024-11-17 01:31:25.238973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.001 [2024-11-17 01:31:25.239070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.001 [2024-11-17 01:31:25.239153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.001 [2024-11-17 01:31:25.239205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.001 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.002 01:31:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 [2024-11-17 01:31:25.386689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:17.002 [2024-11-17 01:31:25.388608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:17.002 [2024-11-17 01:31:25.388714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:17.002 [2024-11-17 01:31:25.388750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:17.002 [2024-11-17 01:31:25.388812] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:17.002 [2024-11-17 01:31:25.388856] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:17.002 [2024-11-17 01:31:25.388874] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:17.002 [2024-11-17 01:31:25.388892] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:17.002 [2024-11-17 01:31:25.388905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.002 [2024-11-17 01:31:25.388918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:17.002 request: 00:11:17.002 { 00:11:17.002 "name": "raid_bdev1", 00:11:17.002 "raid_level": "raid0", 00:11:17.002 "base_bdevs": [ 00:11:17.002 "malloc1", 00:11:17.002 "malloc2", 00:11:17.002 "malloc3", 00:11:17.002 "malloc4" 00:11:17.002 ], 00:11:17.002 "strip_size_kb": 64, 00:11:17.002 "superblock": false, 00:11:17.002 "method": "bdev_raid_create", 00:11:17.002 "req_id": 1 00:11:17.002 } 00:11:17.002 Got JSON-RPC error response 00:11:17.002 response: 00:11:17.002 { 00:11:17.002 "code": -17, 00:11:17.002 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:17.002 } 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.002 [2024-11-17 01:31:25.450550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:17.002 [2024-11-17 01:31:25.450648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.002 [2024-11-17 01:31:25.450695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:17.002 [2024-11-17 01:31:25.450724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.002 [2024-11-17 01:31:25.452756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.002 [2024-11-17 01:31:25.452859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:17.002 [2024-11-17 01:31:25.452950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:17.002 [2024-11-17 01:31:25.453032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.002 pt1 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.002 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.262 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.262 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.262 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.262 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.262 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.262 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.262 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.262 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.262 "name": "raid_bdev1", 00:11:17.262 "uuid": "3c944c71-afa9-4477-9c5a-d3bb5020799c", 00:11:17.262 "strip_size_kb": 64, 00:11:17.262 "state": "configuring", 00:11:17.262 "raid_level": "raid0", 00:11:17.262 "superblock": true, 00:11:17.262 "num_base_bdevs": 4, 00:11:17.262 "num_base_bdevs_discovered": 1, 00:11:17.262 "num_base_bdevs_operational": 4, 00:11:17.262 "base_bdevs_list": [ 00:11:17.262 { 00:11:17.262 "name": "pt1", 00:11:17.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.262 "is_configured": true, 00:11:17.262 "data_offset": 2048, 00:11:17.262 "data_size": 63488 00:11:17.262 }, 00:11:17.262 { 00:11:17.262 "name": null, 00:11:17.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.263 "is_configured": false, 00:11:17.263 "data_offset": 2048, 00:11:17.263 "data_size": 63488 00:11:17.263 }, 00:11:17.263 { 00:11:17.263 "name": null, 00:11:17.263 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.263 "is_configured": false, 00:11:17.263 "data_offset": 2048, 00:11:17.263 "data_size": 63488 00:11:17.263 }, 00:11:17.263 { 00:11:17.263 "name": null, 00:11:17.263 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.263 "is_configured": false, 00:11:17.263 "data_offset": 2048, 00:11:17.263 "data_size": 63488 00:11:17.263 } 00:11:17.263 ] 00:11:17.263 }' 00:11:17.263 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.263 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.522 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:17.522 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.522 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.522 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.522 [2024-11-17 01:31:25.885831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.522 [2024-11-17 01:31:25.885974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.522 [2024-11-17 01:31:25.886011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:17.522 [2024-11-17 01:31:25.886024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.522 [2024-11-17 01:31:25.886431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.522 [2024-11-17 01:31:25.886451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.522 [2024-11-17 01:31:25.886526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:17.522 [2024-11-17 01:31:25.886548] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.522 pt2 00:11:17.522 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.522 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:17.522 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.523 [2024-11-17 01:31:25.897815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.523 01:31:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.523 "name": "raid_bdev1", 00:11:17.523 "uuid": "3c944c71-afa9-4477-9c5a-d3bb5020799c", 00:11:17.523 "strip_size_kb": 64, 00:11:17.523 "state": "configuring", 00:11:17.523 "raid_level": "raid0", 00:11:17.523 "superblock": true, 00:11:17.523 "num_base_bdevs": 4, 00:11:17.523 "num_base_bdevs_discovered": 1, 00:11:17.523 "num_base_bdevs_operational": 4, 00:11:17.523 "base_bdevs_list": [ 00:11:17.523 { 00:11:17.523 "name": "pt1", 00:11:17.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.523 "is_configured": true, 00:11:17.523 "data_offset": 2048, 00:11:17.523 "data_size": 63488 00:11:17.523 }, 00:11:17.523 { 00:11:17.523 "name": null, 00:11:17.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.523 "is_configured": false, 00:11:17.523 "data_offset": 0, 00:11:17.523 "data_size": 63488 00:11:17.523 }, 00:11:17.523 { 00:11:17.523 "name": null, 00:11:17.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.523 "is_configured": false, 00:11:17.523 "data_offset": 2048, 00:11:17.523 "data_size": 63488 00:11:17.523 }, 00:11:17.523 { 00:11:17.523 "name": null, 00:11:17.523 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.523 "is_configured": false, 00:11:17.523 "data_offset": 2048, 00:11:17.523 "data_size": 63488 00:11:17.523 } 00:11:17.523 ] 00:11:17.523 }' 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.523 01:31:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.093 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:18.093 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.093 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.093 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.093 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.093 [2024-11-17 01:31:26.333045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.093 [2024-11-17 01:31:26.333198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.093 [2024-11-17 01:31:26.333222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:18.093 [2024-11-17 01:31:26.333231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.093 [2024-11-17 01:31:26.333659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.093 [2024-11-17 01:31:26.333678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.093 [2024-11-17 01:31:26.333774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.093 [2024-11-17 01:31:26.333797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.093 pt2 00:11:18.093 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.093 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:18.093 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.094 [2024-11-17 01:31:26.344998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:18.094 [2024-11-17 01:31:26.345049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.094 [2024-11-17 01:31:26.345088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:18.094 [2024-11-17 01:31:26.345098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.094 [2024-11-17 01:31:26.345445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.094 [2024-11-17 01:31:26.345471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:18.094 [2024-11-17 01:31:26.345526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:18.094 [2024-11-17 01:31:26.345542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:18.094 pt3 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.094 [2024-11-17 01:31:26.356953] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:18.094 [2024-11-17 01:31:26.357005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.094 [2024-11-17 01:31:26.357022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:18.094 [2024-11-17 01:31:26.357029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.094 [2024-11-17 01:31:26.357359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.094 [2024-11-17 01:31:26.357373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:18.094 [2024-11-17 01:31:26.357428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:18.094 [2024-11-17 01:31:26.357445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:18.094 [2024-11-17 01:31:26.357564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.094 [2024-11-17 01:31:26.357572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:18.094 [2024-11-17 01:31:26.357831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:18.094 [2024-11-17 01:31:26.357969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.094 [2024-11-17 01:31:26.357981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:18.094 [2024-11-17 01:31:26.358104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.094 pt4 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.094 "name": "raid_bdev1", 00:11:18.094 "uuid": "3c944c71-afa9-4477-9c5a-d3bb5020799c", 00:11:18.094 "strip_size_kb": 64, 00:11:18.094 "state": "online", 00:11:18.094 "raid_level": "raid0", 00:11:18.094 
"superblock": true, 00:11:18.094 "num_base_bdevs": 4, 00:11:18.094 "num_base_bdevs_discovered": 4, 00:11:18.094 "num_base_bdevs_operational": 4, 00:11:18.094 "base_bdevs_list": [ 00:11:18.094 { 00:11:18.094 "name": "pt1", 00:11:18.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.094 "is_configured": true, 00:11:18.094 "data_offset": 2048, 00:11:18.094 "data_size": 63488 00:11:18.094 }, 00:11:18.094 { 00:11:18.094 "name": "pt2", 00:11:18.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.094 "is_configured": true, 00:11:18.094 "data_offset": 2048, 00:11:18.094 "data_size": 63488 00:11:18.094 }, 00:11:18.094 { 00:11:18.094 "name": "pt3", 00:11:18.094 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.094 "is_configured": true, 00:11:18.094 "data_offset": 2048, 00:11:18.094 "data_size": 63488 00:11:18.094 }, 00:11:18.094 { 00:11:18.094 "name": "pt4", 00:11:18.094 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.094 "is_configured": true, 00:11:18.094 "data_offset": 2048, 00:11:18.094 "data_size": 63488 00:11:18.094 } 00:11:18.094 ] 00:11:18.094 }' 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.094 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.354 01:31:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.354 [2024-11-17 01:31:26.780568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.354 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.614 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.614 "name": "raid_bdev1", 00:11:18.614 "aliases": [ 00:11:18.614 "3c944c71-afa9-4477-9c5a-d3bb5020799c" 00:11:18.614 ], 00:11:18.614 "product_name": "Raid Volume", 00:11:18.614 "block_size": 512, 00:11:18.614 "num_blocks": 253952, 00:11:18.614 "uuid": "3c944c71-afa9-4477-9c5a-d3bb5020799c", 00:11:18.614 "assigned_rate_limits": { 00:11:18.614 "rw_ios_per_sec": 0, 00:11:18.614 "rw_mbytes_per_sec": 0, 00:11:18.614 "r_mbytes_per_sec": 0, 00:11:18.614 "w_mbytes_per_sec": 0 00:11:18.614 }, 00:11:18.614 "claimed": false, 00:11:18.614 "zoned": false, 00:11:18.614 "supported_io_types": { 00:11:18.614 "read": true, 00:11:18.614 "write": true, 00:11:18.614 "unmap": true, 00:11:18.614 "flush": true, 00:11:18.614 "reset": true, 00:11:18.614 "nvme_admin": false, 00:11:18.614 "nvme_io": false, 00:11:18.614 "nvme_io_md": false, 00:11:18.614 "write_zeroes": true, 00:11:18.614 "zcopy": false, 00:11:18.614 "get_zone_info": false, 00:11:18.614 "zone_management": false, 00:11:18.614 "zone_append": false, 00:11:18.614 "compare": false, 00:11:18.614 "compare_and_write": false, 00:11:18.614 "abort": false, 00:11:18.614 "seek_hole": false, 00:11:18.614 "seek_data": false, 00:11:18.614 "copy": false, 00:11:18.614 "nvme_iov_md": false 00:11:18.614 }, 00:11:18.614 
"memory_domains": [ 00:11:18.614 { 00:11:18.614 "dma_device_id": "system", 00:11:18.614 "dma_device_type": 1 00:11:18.614 }, 00:11:18.614 { 00:11:18.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.614 "dma_device_type": 2 00:11:18.614 }, 00:11:18.614 { 00:11:18.614 "dma_device_id": "system", 00:11:18.614 "dma_device_type": 1 00:11:18.614 }, 00:11:18.614 { 00:11:18.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.614 "dma_device_type": 2 00:11:18.614 }, 00:11:18.614 { 00:11:18.614 "dma_device_id": "system", 00:11:18.614 "dma_device_type": 1 00:11:18.614 }, 00:11:18.614 { 00:11:18.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.614 "dma_device_type": 2 00:11:18.614 }, 00:11:18.615 { 00:11:18.615 "dma_device_id": "system", 00:11:18.615 "dma_device_type": 1 00:11:18.615 }, 00:11:18.615 { 00:11:18.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.615 "dma_device_type": 2 00:11:18.615 } 00:11:18.615 ], 00:11:18.615 "driver_specific": { 00:11:18.615 "raid": { 00:11:18.615 "uuid": "3c944c71-afa9-4477-9c5a-d3bb5020799c", 00:11:18.615 "strip_size_kb": 64, 00:11:18.615 "state": "online", 00:11:18.615 "raid_level": "raid0", 00:11:18.615 "superblock": true, 00:11:18.615 "num_base_bdevs": 4, 00:11:18.615 "num_base_bdevs_discovered": 4, 00:11:18.615 "num_base_bdevs_operational": 4, 00:11:18.615 "base_bdevs_list": [ 00:11:18.615 { 00:11:18.615 "name": "pt1", 00:11:18.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.615 "is_configured": true, 00:11:18.615 "data_offset": 2048, 00:11:18.615 "data_size": 63488 00:11:18.615 }, 00:11:18.615 { 00:11:18.615 "name": "pt2", 00:11:18.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.615 "is_configured": true, 00:11:18.615 "data_offset": 2048, 00:11:18.615 "data_size": 63488 00:11:18.615 }, 00:11:18.615 { 00:11:18.615 "name": "pt3", 00:11:18.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.615 "is_configured": true, 00:11:18.615 "data_offset": 2048, 00:11:18.615 "data_size": 63488 
00:11:18.615 }, 00:11:18.615 { 00:11:18.615 "name": "pt4", 00:11:18.615 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.615 "is_configured": true, 00:11:18.615 "data_offset": 2048, 00:11:18.615 "data_size": 63488 00:11:18.615 } 00:11:18.615 ] 00:11:18.615 } 00:11:18.615 } 00:11:18.615 }' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:18.615 pt2 00:11:18.615 pt3 00:11:18.615 pt4' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.615 01:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.615 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:18.875 [2024-11-17 01:31:27.088054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3c944c71-afa9-4477-9c5a-d3bb5020799c '!=' 3c944c71-afa9-4477-9c5a-d3bb5020799c ']' 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70511 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70511 ']' 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70511 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70511 00:11:18.875 killing process with pid 70511 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70511' 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70511 00:11:18.875 [2024-11-17 01:31:27.153504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.875 [2024-11-17 01:31:27.153588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.875 [2024-11-17 01:31:27.153658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.875 [2024-11-17 01:31:27.153667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:18.875 01:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70511 00:11:19.135 [2024-11-17 01:31:27.541118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.517 01:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:20.517 00:11:20.517 real 0m5.295s 00:11:20.517 user 0m7.472s 00:11:20.517 sys 0m0.928s 00:11:20.517 01:31:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.517 ************************************ 00:11:20.517 END TEST raid_superblock_test 00:11:20.517 ************************************ 00:11:20.517 01:31:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.517 01:31:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:20.517 01:31:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:20.517 01:31:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.517 01:31:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.517 ************************************ 00:11:20.517 START TEST raid_read_error_test 00:11:20.517 ************************************ 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hku2yCYWup 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70770 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70770 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:20.517 01:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70770 ']' 00:11:20.518 01:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.518 01:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.518 01:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.518 01:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.518 01:31:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.518 [2024-11-17 01:31:28.774566] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:20.518 [2024-11-17 01:31:28.774784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70770 ] 00:11:20.518 [2024-11-17 01:31:28.922393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.777 [2024-11-17 01:31:29.033354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.777 [2024-11-17 01:31:29.220460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.777 [2024-11-17 01:31:29.220564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 BaseBdev1_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 true 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 [2024-11-17 01:31:29.629161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:21.347 [2024-11-17 01:31:29.629226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.347 [2024-11-17 01:31:29.629244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:21.347 [2024-11-17 01:31:29.629254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.347 [2024-11-17 01:31:29.631283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.347 [2024-11-17 01:31:29.631377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:21.347 BaseBdev1 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 BaseBdev2_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 true 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 [2024-11-17 01:31:29.693292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:21.347 [2024-11-17 01:31:29.693405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.347 [2024-11-17 01:31:29.693439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:21.347 [2024-11-17 01:31:29.693449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.347 [2024-11-17 01:31:29.695427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.347 [2024-11-17 01:31:29.695471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:21.347 BaseBdev2 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 BaseBdev3_malloc 00:11:21.347 01:31:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 true 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.347 [2024-11-17 01:31:29.770109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:21.347 [2024-11-17 01:31:29.770163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.347 [2024-11-17 01:31:29.770179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:21.347 [2024-11-17 01:31:29.770188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.347 [2024-11-17 01:31:29.772199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.347 [2024-11-17 01:31:29.772240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:21.347 BaseBdev3 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.347 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.608 BaseBdev4_malloc 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.608 true 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.608 [2024-11-17 01:31:29.832738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:21.608 [2024-11-17 01:31:29.832809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.608 [2024-11-17 01:31:29.832826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:21.608 [2024-11-17 01:31:29.832835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.608 [2024-11-17 01:31:29.834926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.608 [2024-11-17 01:31:29.835002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:21.608 BaseBdev4 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.608 [2024-11-17 01:31:29.844802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.608 [2024-11-17 01:31:29.846580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.608 [2024-11-17 01:31:29.846690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.608 [2024-11-17 01:31:29.846801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:21.608 [2024-11-17 01:31:29.847070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:21.608 [2024-11-17 01:31:29.847122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:21.608 [2024-11-17 01:31:29.847362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:21.608 [2024-11-17 01:31:29.847542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:21.608 [2024-11-17 01:31:29.847585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:21.608 [2024-11-17 01:31:29.847776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:21.608 01:31:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.608 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.608 "name": "raid_bdev1", 00:11:21.608 "uuid": "385c281b-e247-4875-b5a5-9260f8ba34f0", 00:11:21.608 "strip_size_kb": 64, 00:11:21.608 "state": "online", 00:11:21.608 "raid_level": "raid0", 00:11:21.608 "superblock": true, 00:11:21.608 "num_base_bdevs": 4, 00:11:21.608 "num_base_bdevs_discovered": 4, 00:11:21.608 "num_base_bdevs_operational": 4, 00:11:21.608 "base_bdevs_list": [ 00:11:21.608 
{ 00:11:21.608 "name": "BaseBdev1", 00:11:21.608 "uuid": "6814c2c4-3a62-53ab-a0d1-d089e50e1e9a", 00:11:21.608 "is_configured": true, 00:11:21.608 "data_offset": 2048, 00:11:21.608 "data_size": 63488 00:11:21.608 }, 00:11:21.608 { 00:11:21.608 "name": "BaseBdev2", 00:11:21.609 "uuid": "b2f6bd01-f086-51b7-ae15-583af89db005", 00:11:21.609 "is_configured": true, 00:11:21.609 "data_offset": 2048, 00:11:21.609 "data_size": 63488 00:11:21.609 }, 00:11:21.609 { 00:11:21.609 "name": "BaseBdev3", 00:11:21.609 "uuid": "12204b9c-a493-5758-99d9-2ac0b9034683", 00:11:21.609 "is_configured": true, 00:11:21.609 "data_offset": 2048, 00:11:21.609 "data_size": 63488 00:11:21.609 }, 00:11:21.609 { 00:11:21.609 "name": "BaseBdev4", 00:11:21.609 "uuid": "10ab8acf-857e-58c0-becd-ee23e27081f8", 00:11:21.609 "is_configured": true, 00:11:21.609 "data_offset": 2048, 00:11:21.609 "data_size": 63488 00:11:21.609 } 00:11:21.609 ] 00:11:21.609 }' 00:11:21.609 01:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.609 01:31:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.868 01:31:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:21.868 01:31:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:22.127 [2024-11-17 01:31:30.369013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.066 01:31:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.066 01:31:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.066 "name": "raid_bdev1", 00:11:23.066 "uuid": "385c281b-e247-4875-b5a5-9260f8ba34f0", 00:11:23.066 "strip_size_kb": 64, 00:11:23.066 "state": "online", 00:11:23.066 "raid_level": "raid0", 00:11:23.066 "superblock": true, 00:11:23.066 "num_base_bdevs": 4, 00:11:23.066 "num_base_bdevs_discovered": 4, 00:11:23.066 "num_base_bdevs_operational": 4, 00:11:23.066 "base_bdevs_list": [ 00:11:23.066 { 00:11:23.066 "name": "BaseBdev1", 00:11:23.066 "uuid": "6814c2c4-3a62-53ab-a0d1-d089e50e1e9a", 00:11:23.066 "is_configured": true, 00:11:23.066 "data_offset": 2048, 00:11:23.066 "data_size": 63488 00:11:23.066 }, 00:11:23.066 { 00:11:23.066 "name": "BaseBdev2", 00:11:23.066 "uuid": "b2f6bd01-f086-51b7-ae15-583af89db005", 00:11:23.066 "is_configured": true, 00:11:23.066 "data_offset": 2048, 00:11:23.066 "data_size": 63488 00:11:23.066 }, 00:11:23.066 { 00:11:23.066 "name": "BaseBdev3", 00:11:23.066 "uuid": "12204b9c-a493-5758-99d9-2ac0b9034683", 00:11:23.066 "is_configured": true, 00:11:23.066 "data_offset": 2048, 00:11:23.066 "data_size": 63488 00:11:23.066 }, 00:11:23.066 { 00:11:23.066 "name": "BaseBdev4", 00:11:23.066 "uuid": "10ab8acf-857e-58c0-becd-ee23e27081f8", 00:11:23.066 "is_configured": true, 00:11:23.066 "data_offset": 2048, 00:11:23.066 "data_size": 63488 00:11:23.066 } 00:11:23.066 ] 00:11:23.066 }' 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.066 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.326 [2024-11-17 01:31:31.764812] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.326 [2024-11-17 01:31:31.764856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.326 [2024-11-17 01:31:31.767287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.326 [2024-11-17 01:31:31.767341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.326 [2024-11-17 01:31:31.767383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.326 [2024-11-17 01:31:31.767394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:23.326 { 00:11:23.326 "results": [ 00:11:23.326 { 00:11:23.326 "job": "raid_bdev1", 00:11:23.326 "core_mask": "0x1", 00:11:23.326 "workload": "randrw", 00:11:23.326 "percentage": 50, 00:11:23.326 "status": "finished", 00:11:23.326 "queue_depth": 1, 00:11:23.326 "io_size": 131072, 00:11:23.326 "runtime": 1.396694, 00:11:23.326 "iops": 16783.203765463302, 00:11:23.326 "mibps": 2097.900470682913, 00:11:23.326 "io_failed": 1, 00:11:23.326 "io_timeout": 0, 00:11:23.326 "avg_latency_us": 83.00297193593852, 00:11:23.326 "min_latency_us": 24.370305676855896, 00:11:23.326 "max_latency_us": 1266.3615720524017 00:11:23.326 } 00:11:23.326 ], 00:11:23.326 "core_count": 1 00:11:23.326 } 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70770 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70770 ']' 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70770 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.326 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70770 00:11:23.586 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.586 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.586 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70770' 00:11:23.586 killing process with pid 70770 00:11:23.586 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70770 00:11:23.586 [2024-11-17 01:31:31.808337] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.586 01:31:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70770 00:11:23.846 [2024-11-17 01:31:32.108474] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hku2yCYWup 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:24.784 ************************************ 00:11:24.784 END TEST raid_read_error_test 00:11:24.784 ************************************ 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:24.784 00:11:24.784 real 0m4.547s 
00:11:24.784 user 0m5.299s 00:11:24.784 sys 0m0.583s 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.784 01:31:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.044 01:31:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:25.044 01:31:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.044 01:31:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.044 01:31:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.044 ************************************ 00:11:25.044 START TEST raid_write_error_test 00:11:25.044 ************************************ 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.J2Hy7aNhoG 00:11:25.044 01:31:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70918 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70918 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70918 ']' 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.044 01:31:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.044 [2024-11-17 01:31:33.392395] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:25.044 [2024-11-17 01:31:33.392565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70918 ] 00:11:25.312 [2024-11-17 01:31:33.564755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.312 [2024-11-17 01:31:33.671346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.587 [2024-11-17 01:31:33.858425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.587 [2024-11-17 01:31:33.858466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.847 BaseBdev1_malloc 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.847 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 true 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 [2024-11-17 01:31:34.315134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:26.108 [2024-11-17 01:31:34.315186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.108 [2024-11-17 01:31:34.315221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:26.108 [2024-11-17 01:31:34.315232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.108 [2024-11-17 01:31:34.317231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.108 [2024-11-17 01:31:34.317268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:26.108 BaseBdev1 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 BaseBdev2_malloc 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:26.108 01:31:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 true 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 [2024-11-17 01:31:34.380655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:26.108 [2024-11-17 01:31:34.380705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.108 [2024-11-17 01:31:34.380722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:26.108 [2024-11-17 01:31:34.380732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.108 [2024-11-17 01:31:34.382706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.108 [2024-11-17 01:31:34.382741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:26.108 BaseBdev2 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:26.108 BaseBdev3_malloc 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 true 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 [2024-11-17 01:31:34.467421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:26.108 [2024-11-17 01:31:34.467466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.108 [2024-11-17 01:31:34.467480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:26.108 [2024-11-17 01:31:34.467490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.108 [2024-11-17 01:31:34.469460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.108 [2024-11-17 01:31:34.469490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:26.108 BaseBdev3 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 BaseBdev4_malloc 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 true 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 [2024-11-17 01:31:34.532897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:26.108 [2024-11-17 01:31:34.532942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.108 [2024-11-17 01:31:34.532974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.108 [2024-11-17 01:31:34.532984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.108 [2024-11-17 01:31:34.534968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.108 [2024-11-17 01:31:34.535001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:26.108 BaseBdev4 
00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.108 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.108 [2024-11-17 01:31:34.544934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.108 [2024-11-17 01:31:34.546656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.109 [2024-11-17 01:31:34.546727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.109 [2024-11-17 01:31:34.546800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.109 [2024-11-17 01:31:34.547014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:26.109 [2024-11-17 01:31:34.547039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:26.109 [2024-11-17 01:31:34.547271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:26.109 [2024-11-17 01:31:34.547417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:26.109 [2024-11-17 01:31:34.547437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:26.109 [2024-11-17 01:31:34.547571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.109 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.368 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.368 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.368 "name": "raid_bdev1", 00:11:26.368 "uuid": "cbdf43cb-d0c2-4312-a418-4f1d94a97d3d", 00:11:26.368 "strip_size_kb": 64, 00:11:26.368 "state": "online", 00:11:26.368 "raid_level": "raid0", 00:11:26.368 "superblock": true, 00:11:26.368 "num_base_bdevs": 4, 00:11:26.368 "num_base_bdevs_discovered": 4, 00:11:26.368 
"num_base_bdevs_operational": 4, 00:11:26.368 "base_bdevs_list": [ 00:11:26.368 { 00:11:26.368 "name": "BaseBdev1", 00:11:26.368 "uuid": "e78ce292-c731-50ac-a37e-81ad4f6d7f65", 00:11:26.368 "is_configured": true, 00:11:26.368 "data_offset": 2048, 00:11:26.368 "data_size": 63488 00:11:26.368 }, 00:11:26.368 { 00:11:26.368 "name": "BaseBdev2", 00:11:26.368 "uuid": "c67cef18-c4dc-5d83-9007-02dd1c74488c", 00:11:26.368 "is_configured": true, 00:11:26.368 "data_offset": 2048, 00:11:26.368 "data_size": 63488 00:11:26.368 }, 00:11:26.368 { 00:11:26.368 "name": "BaseBdev3", 00:11:26.368 "uuid": "bfa5478a-2ca8-58ff-a0fb-b453c663b000", 00:11:26.368 "is_configured": true, 00:11:26.368 "data_offset": 2048, 00:11:26.368 "data_size": 63488 00:11:26.368 }, 00:11:26.368 { 00:11:26.368 "name": "BaseBdev4", 00:11:26.368 "uuid": "35d9e2fb-277d-55d3-a5d8-5da8fc159264", 00:11:26.368 "is_configured": true, 00:11:26.368 "data_offset": 2048, 00:11:26.368 "data_size": 63488 00:11:26.368 } 00:11:26.368 ] 00:11:26.368 }' 00:11:26.368 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.368 01:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.626 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:26.626 01:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:26.626 [2024-11-17 01:31:35.069169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.562 01:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.562 01:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.562 01:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.562 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.562 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.821 01:31:36 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.821 01:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.821 "name": "raid_bdev1", 00:11:27.821 "uuid": "cbdf43cb-d0c2-4312-a418-4f1d94a97d3d", 00:11:27.821 "strip_size_kb": 64, 00:11:27.821 "state": "online", 00:11:27.821 "raid_level": "raid0", 00:11:27.821 "superblock": true, 00:11:27.821 "num_base_bdevs": 4, 00:11:27.821 "num_base_bdevs_discovered": 4, 00:11:27.821 "num_base_bdevs_operational": 4, 00:11:27.821 "base_bdevs_list": [ 00:11:27.821 { 00:11:27.821 "name": "BaseBdev1", 00:11:27.821 "uuid": "e78ce292-c731-50ac-a37e-81ad4f6d7f65", 00:11:27.821 "is_configured": true, 00:11:27.821 "data_offset": 2048, 00:11:27.822 "data_size": 63488 00:11:27.822 }, 00:11:27.822 { 00:11:27.822 "name": "BaseBdev2", 00:11:27.822 "uuid": "c67cef18-c4dc-5d83-9007-02dd1c74488c", 00:11:27.822 "is_configured": true, 00:11:27.822 "data_offset": 2048, 00:11:27.822 "data_size": 63488 00:11:27.822 }, 00:11:27.822 { 00:11:27.822 "name": "BaseBdev3", 00:11:27.822 "uuid": "bfa5478a-2ca8-58ff-a0fb-b453c663b000", 00:11:27.822 "is_configured": true, 00:11:27.822 "data_offset": 2048, 00:11:27.822 "data_size": 63488 00:11:27.822 }, 00:11:27.822 { 00:11:27.822 "name": "BaseBdev4", 00:11:27.822 "uuid": "35d9e2fb-277d-55d3-a5d8-5da8fc159264", 00:11:27.822 "is_configured": true, 00:11:27.822 "data_offset": 2048, 00:11:27.822 "data_size": 63488 00:11:27.822 } 00:11:27.822 ] 00:11:27.822 }' 00:11:27.822 01:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.822 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:28.081 [2024-11-17 01:31:36.465035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.081 [2024-11-17 01:31:36.465080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.081 [2024-11-17 01:31:36.467721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.081 [2024-11-17 01:31:36.467796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.081 [2024-11-17 01:31:36.467841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.081 [2024-11-17 01:31:36.467854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70918 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70918 ']' 00:11:28.081 { 00:11:28.081 "results": [ 00:11:28.081 { 00:11:28.081 "job": "raid_bdev1", 00:11:28.081 "core_mask": "0x1", 00:11:28.081 "workload": "randrw", 00:11:28.081 "percentage": 50, 00:11:28.081 "status": "finished", 00:11:28.081 "queue_depth": 1, 00:11:28.081 "io_size": 131072, 00:11:28.081 "runtime": 1.396769, 00:11:28.081 "iops": 16591.863078289967, 00:11:28.081 "mibps": 2073.982884786246, 00:11:28.081 "io_failed": 1, 00:11:28.081 "io_timeout": 0, 00:11:28.081 "avg_latency_us": 83.89409312147937, 00:11:28.081 "min_latency_us": 25.041048034934498, 00:11:28.081 "max_latency_us": 1259.2069868995634 00:11:28.081 } 00:11:28.081 ], 00:11:28.081 "core_count": 1 00:11:28.081 } 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70918 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70918 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.081 killing process with pid 70918 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70918' 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70918 00:11:28.081 [2024-11-17 01:31:36.506932] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.081 01:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70918 00:11:28.650 [2024-11-17 01:31:36.814159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.J2Hy7aNhoG 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:29.587 00:11:29.587 real 0m4.655s 00:11:29.587 user 0m5.503s 00:11:29.587 sys 0m0.602s 00:11:29.587 01:31:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.587 01:31:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.587 ************************************ 00:11:29.587 END TEST raid_write_error_test 00:11:29.587 ************************************ 00:11:29.587 01:31:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:29.587 01:31:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:29.587 01:31:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.587 01:31:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.587 01:31:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.587 ************************************ 00:11:29.587 START TEST raid_state_function_test 00:11:29.587 ************************************ 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71059 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71059' 00:11:29.587 Process raid pid: 71059 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71059 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71059 ']' 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.587 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.847 [2024-11-17 01:31:38.105052] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:29.847 [2024-11-17 01:31:38.105161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.847 [2024-11-17 01:31:38.276124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.106 [2024-11-17 01:31:38.393063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.366 [2024-11-17 01:31:38.592844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.366 [2024-11-17 01:31:38.592888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.626 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.626 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.626 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:30.626 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.626 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.626 [2024-11-17 01:31:38.935010] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.626 [2024-11-17 01:31:38.935061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.626 [2024-11-17 01:31:38.935078] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.626 [2024-11-17 01:31:38.935087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.626 [2024-11-17 01:31:38.935094] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:30.627 [2024-11-17 01:31:38.935102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:30.627 [2024-11-17 01:31:38.935108] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:30.627 [2024-11-17 01:31:38.935117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.627 "name": "Existed_Raid", 00:11:30.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.627 "strip_size_kb": 64, 00:11:30.627 "state": "configuring", 00:11:30.627 "raid_level": "concat", 00:11:30.627 "superblock": false, 00:11:30.627 "num_base_bdevs": 4, 00:11:30.627 "num_base_bdevs_discovered": 0, 00:11:30.627 "num_base_bdevs_operational": 4, 00:11:30.627 "base_bdevs_list": [ 00:11:30.627 { 00:11:30.627 "name": "BaseBdev1", 00:11:30.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.627 "is_configured": false, 00:11:30.627 "data_offset": 0, 00:11:30.627 "data_size": 0 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "name": "BaseBdev2", 00:11:30.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.627 "is_configured": false, 00:11:30.627 "data_offset": 0, 00:11:30.627 "data_size": 0 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "name": "BaseBdev3", 00:11:30.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.627 "is_configured": false, 00:11:30.627 "data_offset": 0, 00:11:30.627 "data_size": 0 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "name": "BaseBdev4", 00:11:30.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.627 "is_configured": false, 00:11:30.627 "data_offset": 0, 00:11:30.627 "data_size": 0 00:11:30.627 } 00:11:30.627 ] 00:11:30.627 }' 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.627 01:31:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.197 [2024-11-17 01:31:39.362261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.197 [2024-11-17 01:31:39.362309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.197 [2024-11-17 01:31:39.374207] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.197 [2024-11-17 01:31:39.374249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.197 [2024-11-17 01:31:39.374258] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.197 [2024-11-17 01:31:39.374267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.197 [2024-11-17 01:31:39.374273] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.197 [2024-11-17 01:31:39.374282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.197 [2024-11-17 01:31:39.374288] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.197 [2024-11-17 01:31:39.374296] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.197 [2024-11-17 01:31:39.420380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.197 BaseBdev1 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.197 [ 00:11:31.197 { 00:11:31.197 "name": "BaseBdev1", 00:11:31.197 "aliases": [ 00:11:31.197 "1c2b6731-667e-4b41-ae63-61022b1cc9db" 00:11:31.197 ], 00:11:31.197 "product_name": "Malloc disk", 00:11:31.197 "block_size": 512, 00:11:31.197 "num_blocks": 65536, 00:11:31.197 "uuid": "1c2b6731-667e-4b41-ae63-61022b1cc9db", 00:11:31.197 "assigned_rate_limits": { 00:11:31.197 "rw_ios_per_sec": 0, 00:11:31.197 "rw_mbytes_per_sec": 0, 00:11:31.197 "r_mbytes_per_sec": 0, 00:11:31.197 "w_mbytes_per_sec": 0 00:11:31.197 }, 00:11:31.197 "claimed": true, 00:11:31.197 "claim_type": "exclusive_write", 00:11:31.197 "zoned": false, 00:11:31.197 "supported_io_types": { 00:11:31.197 "read": true, 00:11:31.197 "write": true, 00:11:31.197 "unmap": true, 00:11:31.197 "flush": true, 00:11:31.197 "reset": true, 00:11:31.197 "nvme_admin": false, 00:11:31.197 "nvme_io": false, 00:11:31.197 "nvme_io_md": false, 00:11:31.197 "write_zeroes": true, 00:11:31.197 "zcopy": true, 00:11:31.197 "get_zone_info": false, 00:11:31.197 "zone_management": false, 00:11:31.197 "zone_append": false, 00:11:31.197 "compare": false, 00:11:31.197 "compare_and_write": false, 00:11:31.197 "abort": true, 00:11:31.197 "seek_hole": false, 00:11:31.197 "seek_data": false, 00:11:31.197 "copy": true, 00:11:31.197 "nvme_iov_md": false 00:11:31.197 }, 00:11:31.197 "memory_domains": [ 00:11:31.197 { 00:11:31.197 "dma_device_id": "system", 00:11:31.197 "dma_device_type": 1 00:11:31.197 }, 00:11:31.197 { 00:11:31.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.197 "dma_device_type": 2 00:11:31.197 } 00:11:31.197 ], 00:11:31.197 "driver_specific": {} 00:11:31.197 } 00:11:31.197 ] 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.197 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.198 "name": "Existed_Raid", 
00:11:31.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.198 "strip_size_kb": 64, 00:11:31.198 "state": "configuring", 00:11:31.198 "raid_level": "concat", 00:11:31.198 "superblock": false, 00:11:31.198 "num_base_bdevs": 4, 00:11:31.198 "num_base_bdevs_discovered": 1, 00:11:31.198 "num_base_bdevs_operational": 4, 00:11:31.198 "base_bdevs_list": [ 00:11:31.198 { 00:11:31.198 "name": "BaseBdev1", 00:11:31.198 "uuid": "1c2b6731-667e-4b41-ae63-61022b1cc9db", 00:11:31.198 "is_configured": true, 00:11:31.198 "data_offset": 0, 00:11:31.198 "data_size": 65536 00:11:31.198 }, 00:11:31.198 { 00:11:31.198 "name": "BaseBdev2", 00:11:31.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.198 "is_configured": false, 00:11:31.198 "data_offset": 0, 00:11:31.198 "data_size": 0 00:11:31.198 }, 00:11:31.198 { 00:11:31.198 "name": "BaseBdev3", 00:11:31.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.198 "is_configured": false, 00:11:31.198 "data_offset": 0, 00:11:31.198 "data_size": 0 00:11:31.198 }, 00:11:31.198 { 00:11:31.198 "name": "BaseBdev4", 00:11:31.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.198 "is_configured": false, 00:11:31.198 "data_offset": 0, 00:11:31.198 "data_size": 0 00:11:31.198 } 00:11:31.198 ] 00:11:31.198 }' 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.198 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.771 [2024-11-17 01:31:39.935603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.771 [2024-11-17 01:31:39.935663] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.771 [2024-11-17 01:31:39.943633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.771 [2024-11-17 01:31:39.945433] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.771 [2024-11-17 01:31:39.945473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.771 [2024-11-17 01:31:39.945483] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.771 [2024-11-17 01:31:39.945493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.771 [2024-11-17 01:31:39.945499] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.771 [2024-11-17 01:31:39.945508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.771 "name": "Existed_Raid", 00:11:31.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.771 "strip_size_kb": 64, 00:11:31.771 "state": "configuring", 00:11:31.771 "raid_level": "concat", 00:11:31.771 "superblock": false, 00:11:31.771 "num_base_bdevs": 4, 00:11:31.771 
"num_base_bdevs_discovered": 1, 00:11:31.771 "num_base_bdevs_operational": 4, 00:11:31.771 "base_bdevs_list": [ 00:11:31.771 { 00:11:31.771 "name": "BaseBdev1", 00:11:31.771 "uuid": "1c2b6731-667e-4b41-ae63-61022b1cc9db", 00:11:31.771 "is_configured": true, 00:11:31.771 "data_offset": 0, 00:11:31.771 "data_size": 65536 00:11:31.771 }, 00:11:31.771 { 00:11:31.771 "name": "BaseBdev2", 00:11:31.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.771 "is_configured": false, 00:11:31.771 "data_offset": 0, 00:11:31.771 "data_size": 0 00:11:31.771 }, 00:11:31.771 { 00:11:31.771 "name": "BaseBdev3", 00:11:31.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.771 "is_configured": false, 00:11:31.771 "data_offset": 0, 00:11:31.771 "data_size": 0 00:11:31.771 }, 00:11:31.771 { 00:11:31.771 "name": "BaseBdev4", 00:11:31.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.771 "is_configured": false, 00:11:31.771 "data_offset": 0, 00:11:31.771 "data_size": 0 00:11:31.771 } 00:11:31.771 ] 00:11:31.771 }' 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.771 01:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.030 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 [2024-11-17 01:31:40.435309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.031 BaseBdev2 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:32.031 01:31:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 [ 00:11:32.031 { 00:11:32.031 "name": "BaseBdev2", 00:11:32.031 "aliases": [ 00:11:32.031 "d5fa2f09-aec5-4960-bab3-d033026913ab" 00:11:32.031 ], 00:11:32.031 "product_name": "Malloc disk", 00:11:32.031 "block_size": 512, 00:11:32.031 "num_blocks": 65536, 00:11:32.031 "uuid": "d5fa2f09-aec5-4960-bab3-d033026913ab", 00:11:32.031 "assigned_rate_limits": { 00:11:32.031 "rw_ios_per_sec": 0, 00:11:32.031 "rw_mbytes_per_sec": 0, 00:11:32.031 "r_mbytes_per_sec": 0, 00:11:32.031 "w_mbytes_per_sec": 0 00:11:32.031 }, 00:11:32.031 "claimed": true, 00:11:32.031 "claim_type": "exclusive_write", 00:11:32.031 "zoned": false, 00:11:32.031 "supported_io_types": { 
00:11:32.031 "read": true, 00:11:32.031 "write": true, 00:11:32.031 "unmap": true, 00:11:32.031 "flush": true, 00:11:32.031 "reset": true, 00:11:32.031 "nvme_admin": false, 00:11:32.031 "nvme_io": false, 00:11:32.031 "nvme_io_md": false, 00:11:32.031 "write_zeroes": true, 00:11:32.031 "zcopy": true, 00:11:32.031 "get_zone_info": false, 00:11:32.031 "zone_management": false, 00:11:32.031 "zone_append": false, 00:11:32.031 "compare": false, 00:11:32.031 "compare_and_write": false, 00:11:32.031 "abort": true, 00:11:32.031 "seek_hole": false, 00:11:32.031 "seek_data": false, 00:11:32.031 "copy": true, 00:11:32.031 "nvme_iov_md": false 00:11:32.031 }, 00:11:32.031 "memory_domains": [ 00:11:32.031 { 00:11:32.031 "dma_device_id": "system", 00:11:32.031 "dma_device_type": 1 00:11:32.031 }, 00:11:32.031 { 00:11:32.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.031 "dma_device_type": 2 00:11:32.031 } 00:11:32.031 ], 00:11:32.031 "driver_specific": {} 00:11:32.031 } 00:11:32.031 ] 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.031 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.291 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.291 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.291 "name": "Existed_Raid", 00:11:32.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.291 "strip_size_kb": 64, 00:11:32.291 "state": "configuring", 00:11:32.291 "raid_level": "concat", 00:11:32.291 "superblock": false, 00:11:32.291 "num_base_bdevs": 4, 00:11:32.291 "num_base_bdevs_discovered": 2, 00:11:32.291 "num_base_bdevs_operational": 4, 00:11:32.291 "base_bdevs_list": [ 00:11:32.291 { 00:11:32.291 "name": "BaseBdev1", 00:11:32.291 "uuid": "1c2b6731-667e-4b41-ae63-61022b1cc9db", 00:11:32.291 "is_configured": true, 00:11:32.291 "data_offset": 0, 00:11:32.291 "data_size": 65536 00:11:32.291 }, 00:11:32.291 { 00:11:32.291 "name": "BaseBdev2", 00:11:32.291 "uuid": "d5fa2f09-aec5-4960-bab3-d033026913ab", 00:11:32.291 
"is_configured": true, 00:11:32.291 "data_offset": 0, 00:11:32.291 "data_size": 65536 00:11:32.291 }, 00:11:32.291 { 00:11:32.291 "name": "BaseBdev3", 00:11:32.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.291 "is_configured": false, 00:11:32.291 "data_offset": 0, 00:11:32.291 "data_size": 0 00:11:32.291 }, 00:11:32.291 { 00:11:32.291 "name": "BaseBdev4", 00:11:32.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.291 "is_configured": false, 00:11:32.291 "data_offset": 0, 00:11:32.291 "data_size": 0 00:11:32.291 } 00:11:32.291 ] 00:11:32.291 }' 00:11:32.291 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.291 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.551 [2024-11-17 01:31:40.961942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.551 BaseBdev3 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.551 01:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.551 [ 00:11:32.551 { 00:11:32.551 "name": "BaseBdev3", 00:11:32.551 "aliases": [ 00:11:32.551 "67ea729e-917c-406f-a519-7f2b9d0ca020" 00:11:32.551 ], 00:11:32.551 "product_name": "Malloc disk", 00:11:32.551 "block_size": 512, 00:11:32.551 "num_blocks": 65536, 00:11:32.551 "uuid": "67ea729e-917c-406f-a519-7f2b9d0ca020", 00:11:32.551 "assigned_rate_limits": { 00:11:32.551 "rw_ios_per_sec": 0, 00:11:32.551 "rw_mbytes_per_sec": 0, 00:11:32.551 "r_mbytes_per_sec": 0, 00:11:32.551 "w_mbytes_per_sec": 0 00:11:32.551 }, 00:11:32.551 "claimed": true, 00:11:32.551 "claim_type": "exclusive_write", 00:11:32.551 "zoned": false, 00:11:32.551 "supported_io_types": { 00:11:32.551 "read": true, 00:11:32.551 "write": true, 00:11:32.551 "unmap": true, 00:11:32.551 "flush": true, 00:11:32.551 "reset": true, 00:11:32.551 "nvme_admin": false, 00:11:32.551 "nvme_io": false, 00:11:32.551 "nvme_io_md": false, 00:11:32.551 "write_zeroes": true, 00:11:32.551 "zcopy": true, 00:11:32.551 "get_zone_info": false, 00:11:32.551 "zone_management": false, 00:11:32.551 "zone_append": false, 00:11:32.551 "compare": false, 00:11:32.551 "compare_and_write": false, 
00:11:32.551 "abort": true, 00:11:32.551 "seek_hole": false, 00:11:32.551 "seek_data": false, 00:11:32.551 "copy": true, 00:11:32.551 "nvme_iov_md": false 00:11:32.551 }, 00:11:32.551 "memory_domains": [ 00:11:32.551 { 00:11:32.551 "dma_device_id": "system", 00:11:32.551 "dma_device_type": 1 00:11:32.551 }, 00:11:32.551 { 00:11:32.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.551 "dma_device_type": 2 00:11:32.551 } 00:11:32.551 ], 00:11:32.551 "driver_specific": {} 00:11:32.551 } 00:11:32.551 ] 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:32.551 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.811 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.811 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.811 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.811 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.811 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.812 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.812 "name": "Existed_Raid", 00:11:32.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.812 "strip_size_kb": 64, 00:11:32.812 "state": "configuring", 00:11:32.812 "raid_level": "concat", 00:11:32.812 "superblock": false, 00:11:32.812 "num_base_bdevs": 4, 00:11:32.812 "num_base_bdevs_discovered": 3, 00:11:32.812 "num_base_bdevs_operational": 4, 00:11:32.812 "base_bdevs_list": [ 00:11:32.812 { 00:11:32.812 "name": "BaseBdev1", 00:11:32.812 "uuid": "1c2b6731-667e-4b41-ae63-61022b1cc9db", 00:11:32.812 "is_configured": true, 00:11:32.812 "data_offset": 0, 00:11:32.812 "data_size": 65536 00:11:32.812 }, 00:11:32.812 { 00:11:32.812 "name": "BaseBdev2", 00:11:32.812 "uuid": "d5fa2f09-aec5-4960-bab3-d033026913ab", 00:11:32.812 "is_configured": true, 00:11:32.812 "data_offset": 0, 00:11:32.812 "data_size": 65536 00:11:32.812 }, 00:11:32.812 { 00:11:32.812 "name": "BaseBdev3", 00:11:32.812 "uuid": "67ea729e-917c-406f-a519-7f2b9d0ca020", 00:11:32.812 "is_configured": true, 00:11:32.812 "data_offset": 0, 00:11:32.812 "data_size": 65536 00:11:32.812 }, 00:11:32.812 { 00:11:32.812 "name": "BaseBdev4", 00:11:32.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.812 "is_configured": false, 
00:11:32.812 "data_offset": 0, 00:11:32.812 "data_size": 0 00:11:32.812 } 00:11:32.812 ] 00:11:32.812 }' 00:11:32.812 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.812 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.072 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:33.072 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.072 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.072 [2024-11-17 01:31:41.486031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.072 [2024-11-17 01:31:41.486080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:33.072 [2024-11-17 01:31:41.486089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:33.072 [2024-11-17 01:31:41.486357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.072 [2024-11-17 01:31:41.486521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:33.072 [2024-11-17 01:31:41.486537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:33.072 [2024-11-17 01:31:41.486827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.072 BaseBdev4 00:11:33.072 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.072 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:33.072 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:33.072 01:31:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.073 [ 00:11:33.073 { 00:11:33.073 "name": "BaseBdev4", 00:11:33.073 "aliases": [ 00:11:33.073 "5f1b43b6-78f9-4127-b49c-64c63fe2e48f" 00:11:33.073 ], 00:11:33.073 "product_name": "Malloc disk", 00:11:33.073 "block_size": 512, 00:11:33.073 "num_blocks": 65536, 00:11:33.073 "uuid": "5f1b43b6-78f9-4127-b49c-64c63fe2e48f", 00:11:33.073 "assigned_rate_limits": { 00:11:33.073 "rw_ios_per_sec": 0, 00:11:33.073 "rw_mbytes_per_sec": 0, 00:11:33.073 "r_mbytes_per_sec": 0, 00:11:33.073 "w_mbytes_per_sec": 0 00:11:33.073 }, 00:11:33.073 "claimed": true, 00:11:33.073 "claim_type": "exclusive_write", 00:11:33.073 "zoned": false, 00:11:33.073 "supported_io_types": { 00:11:33.073 "read": true, 00:11:33.073 "write": true, 00:11:33.073 "unmap": true, 00:11:33.073 "flush": true, 00:11:33.073 "reset": true, 00:11:33.073 
"nvme_admin": false, 00:11:33.073 "nvme_io": false, 00:11:33.073 "nvme_io_md": false, 00:11:33.073 "write_zeroes": true, 00:11:33.073 "zcopy": true, 00:11:33.073 "get_zone_info": false, 00:11:33.073 "zone_management": false, 00:11:33.073 "zone_append": false, 00:11:33.073 "compare": false, 00:11:33.073 "compare_and_write": false, 00:11:33.073 "abort": true, 00:11:33.073 "seek_hole": false, 00:11:33.073 "seek_data": false, 00:11:33.073 "copy": true, 00:11:33.073 "nvme_iov_md": false 00:11:33.073 }, 00:11:33.073 "memory_domains": [ 00:11:33.073 { 00:11:33.073 "dma_device_id": "system", 00:11:33.073 "dma_device_type": 1 00:11:33.073 }, 00:11:33.073 { 00:11:33.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.073 "dma_device_type": 2 00:11:33.073 } 00:11:33.073 ], 00:11:33.073 "driver_specific": {} 00:11:33.073 } 00:11:33.073 ] 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.073 
01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.073 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.333 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.333 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.333 "name": "Existed_Raid", 00:11:33.333 "uuid": "de2c7d4b-8b02-41aa-a7f1-1b4e2d03ec7f", 00:11:33.333 "strip_size_kb": 64, 00:11:33.333 "state": "online", 00:11:33.333 "raid_level": "concat", 00:11:33.333 "superblock": false, 00:11:33.333 "num_base_bdevs": 4, 00:11:33.333 "num_base_bdevs_discovered": 4, 00:11:33.333 "num_base_bdevs_operational": 4, 00:11:33.333 "base_bdevs_list": [ 00:11:33.333 { 00:11:33.333 "name": "BaseBdev1", 00:11:33.333 "uuid": "1c2b6731-667e-4b41-ae63-61022b1cc9db", 00:11:33.333 "is_configured": true, 00:11:33.333 "data_offset": 0, 00:11:33.333 "data_size": 65536 00:11:33.333 }, 00:11:33.333 { 00:11:33.333 "name": "BaseBdev2", 00:11:33.333 "uuid": "d5fa2f09-aec5-4960-bab3-d033026913ab", 00:11:33.333 "is_configured": true, 00:11:33.333 "data_offset": 0, 00:11:33.333 "data_size": 65536 00:11:33.333 }, 00:11:33.333 { 00:11:33.333 "name": "BaseBdev3", 
00:11:33.333 "uuid": "67ea729e-917c-406f-a519-7f2b9d0ca020", 00:11:33.333 "is_configured": true, 00:11:33.333 "data_offset": 0, 00:11:33.333 "data_size": 65536 00:11:33.333 }, 00:11:33.333 { 00:11:33.333 "name": "BaseBdev4", 00:11:33.333 "uuid": "5f1b43b6-78f9-4127-b49c-64c63fe2e48f", 00:11:33.333 "is_configured": true, 00:11:33.333 "data_offset": 0, 00:11:33.333 "data_size": 65536 00:11:33.333 } 00:11:33.333 ] 00:11:33.333 }' 00:11:33.333 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.333 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.594 [2024-11-17 01:31:41.949648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.594 
01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.594 "name": "Existed_Raid", 00:11:33.594 "aliases": [ 00:11:33.594 "de2c7d4b-8b02-41aa-a7f1-1b4e2d03ec7f" 00:11:33.594 ], 00:11:33.594 "product_name": "Raid Volume", 00:11:33.594 "block_size": 512, 00:11:33.594 "num_blocks": 262144, 00:11:33.594 "uuid": "de2c7d4b-8b02-41aa-a7f1-1b4e2d03ec7f", 00:11:33.594 "assigned_rate_limits": { 00:11:33.594 "rw_ios_per_sec": 0, 00:11:33.594 "rw_mbytes_per_sec": 0, 00:11:33.594 "r_mbytes_per_sec": 0, 00:11:33.594 "w_mbytes_per_sec": 0 00:11:33.594 }, 00:11:33.594 "claimed": false, 00:11:33.594 "zoned": false, 00:11:33.594 "supported_io_types": { 00:11:33.594 "read": true, 00:11:33.594 "write": true, 00:11:33.594 "unmap": true, 00:11:33.594 "flush": true, 00:11:33.594 "reset": true, 00:11:33.594 "nvme_admin": false, 00:11:33.594 "nvme_io": false, 00:11:33.594 "nvme_io_md": false, 00:11:33.594 "write_zeroes": true, 00:11:33.594 "zcopy": false, 00:11:33.594 "get_zone_info": false, 00:11:33.594 "zone_management": false, 00:11:33.594 "zone_append": false, 00:11:33.594 "compare": false, 00:11:33.594 "compare_and_write": false, 00:11:33.594 "abort": false, 00:11:33.594 "seek_hole": false, 00:11:33.594 "seek_data": false, 00:11:33.594 "copy": false, 00:11:33.594 "nvme_iov_md": false 00:11:33.594 }, 00:11:33.594 "memory_domains": [ 00:11:33.594 { 00:11:33.594 "dma_device_id": "system", 00:11:33.594 "dma_device_type": 1 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.594 "dma_device_type": 2 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "dma_device_id": "system", 00:11:33.594 "dma_device_type": 1 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.594 "dma_device_type": 2 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "dma_device_id": "system", 00:11:33.594 "dma_device_type": 1 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:33.594 "dma_device_type": 2 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "dma_device_id": "system", 00:11:33.594 "dma_device_type": 1 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.594 "dma_device_type": 2 00:11:33.594 } 00:11:33.594 ], 00:11:33.594 "driver_specific": { 00:11:33.594 "raid": { 00:11:33.594 "uuid": "de2c7d4b-8b02-41aa-a7f1-1b4e2d03ec7f", 00:11:33.594 "strip_size_kb": 64, 00:11:33.594 "state": "online", 00:11:33.594 "raid_level": "concat", 00:11:33.594 "superblock": false, 00:11:33.594 "num_base_bdevs": 4, 00:11:33.594 "num_base_bdevs_discovered": 4, 00:11:33.594 "num_base_bdevs_operational": 4, 00:11:33.594 "base_bdevs_list": [ 00:11:33.594 { 00:11:33.594 "name": "BaseBdev1", 00:11:33.594 "uuid": "1c2b6731-667e-4b41-ae63-61022b1cc9db", 00:11:33.594 "is_configured": true, 00:11:33.594 "data_offset": 0, 00:11:33.594 "data_size": 65536 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "name": "BaseBdev2", 00:11:33.594 "uuid": "d5fa2f09-aec5-4960-bab3-d033026913ab", 00:11:33.594 "is_configured": true, 00:11:33.594 "data_offset": 0, 00:11:33.594 "data_size": 65536 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "name": "BaseBdev3", 00:11:33.594 "uuid": "67ea729e-917c-406f-a519-7f2b9d0ca020", 00:11:33.594 "is_configured": true, 00:11:33.594 "data_offset": 0, 00:11:33.594 "data_size": 65536 00:11:33.594 }, 00:11:33.594 { 00:11:33.594 "name": "BaseBdev4", 00:11:33.594 "uuid": "5f1b43b6-78f9-4127-b49c-64c63fe2e48f", 00:11:33.594 "is_configured": true, 00:11:33.594 "data_offset": 0, 00:11:33.594 "data_size": 65536 00:11:33.594 } 00:11:33.594 ] 00:11:33.594 } 00:11:33.594 } 00:11:33.594 }' 00:11:33.594 01:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.594 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:33.594 BaseBdev2 
00:11:33.594 BaseBdev3 00:11:33.594 BaseBdev4' 00:11:33.594 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.855 01:31:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.855 01:31:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.855 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.855 [2024-11-17 01:31:42.236861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:33.855 [2024-11-17 01:31:42.236892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.855 [2024-11-17 01:31:42.236942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.115 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.115 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.116 "name": "Existed_Raid", 00:11:34.116 "uuid": "de2c7d4b-8b02-41aa-a7f1-1b4e2d03ec7f", 00:11:34.116 "strip_size_kb": 64, 00:11:34.116 "state": "offline", 00:11:34.116 "raid_level": "concat", 00:11:34.116 "superblock": false, 00:11:34.116 "num_base_bdevs": 4, 00:11:34.116 "num_base_bdevs_discovered": 3, 00:11:34.116 "num_base_bdevs_operational": 3, 00:11:34.116 "base_bdevs_list": [ 00:11:34.116 { 00:11:34.116 "name": null, 00:11:34.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.116 "is_configured": false, 00:11:34.116 "data_offset": 0, 00:11:34.116 "data_size": 65536 00:11:34.116 }, 00:11:34.116 { 00:11:34.116 "name": "BaseBdev2", 00:11:34.116 "uuid": "d5fa2f09-aec5-4960-bab3-d033026913ab", 00:11:34.116 "is_configured": 
true, 00:11:34.116 "data_offset": 0, 00:11:34.116 "data_size": 65536 00:11:34.116 }, 00:11:34.116 { 00:11:34.116 "name": "BaseBdev3", 00:11:34.116 "uuid": "67ea729e-917c-406f-a519-7f2b9d0ca020", 00:11:34.116 "is_configured": true, 00:11:34.116 "data_offset": 0, 00:11:34.116 "data_size": 65536 00:11:34.116 }, 00:11:34.116 { 00:11:34.116 "name": "BaseBdev4", 00:11:34.116 "uuid": "5f1b43b6-78f9-4127-b49c-64c63fe2e48f", 00:11:34.116 "is_configured": true, 00:11:34.116 "data_offset": 0, 00:11:34.116 "data_size": 65536 00:11:34.116 } 00:11:34.116 ] 00:11:34.116 }' 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.116 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.375 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:34.375 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:34.375 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.375 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.375 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.375 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:34.375 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.635 [2024-11-17 01:31:42.847826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.635 01:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.635 [2024-11-17 01:31:42.999406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:34.635 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.635 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:34.635 01:31:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.895 [2024-11-17 01:31:43.135027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:34.895 [2024-11-17 01:31:43.135087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.895 BaseBdev2 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.895 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.895 [ 00:11:34.895 { 00:11:34.895 "name": "BaseBdev2", 00:11:34.895 "aliases": [ 00:11:34.895 "77111af3-f42b-4b38-a15d-f3ba7dc25244" 00:11:34.895 ], 00:11:34.895 "product_name": "Malloc disk", 00:11:34.895 "block_size": 512, 00:11:34.895 "num_blocks": 65536, 00:11:34.895 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:34.895 "assigned_rate_limits": { 00:11:34.895 "rw_ios_per_sec": 0, 00:11:34.895 "rw_mbytes_per_sec": 0, 00:11:34.895 "r_mbytes_per_sec": 0, 00:11:34.895 "w_mbytes_per_sec": 0 00:11:34.895 }, 00:11:34.895 "claimed": false, 00:11:34.895 "zoned": false, 00:11:34.895 "supported_io_types": { 00:11:34.895 "read": true, 00:11:34.895 "write": true, 00:11:34.895 "unmap": true, 00:11:34.895 "flush": true, 00:11:34.895 "reset": true, 00:11:34.895 "nvme_admin": false, 00:11:34.895 "nvme_io": false, 00:11:34.895 "nvme_io_md": false, 00:11:34.895 "write_zeroes": true, 00:11:34.895 "zcopy": true, 00:11:34.895 "get_zone_info": false, 00:11:34.895 "zone_management": false, 00:11:34.895 "zone_append": false, 00:11:35.156 "compare": false, 00:11:35.156 "compare_and_write": false, 00:11:35.156 "abort": true, 00:11:35.156 "seek_hole": false, 00:11:35.156 
"seek_data": false, 00:11:35.156 "copy": true, 00:11:35.156 "nvme_iov_md": false 00:11:35.156 }, 00:11:35.156 "memory_domains": [ 00:11:35.156 { 00:11:35.156 "dma_device_id": "system", 00:11:35.156 "dma_device_type": 1 00:11:35.156 }, 00:11:35.156 { 00:11:35.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.156 "dma_device_type": 2 00:11:35.156 } 00:11:35.156 ], 00:11:35.156 "driver_specific": {} 00:11:35.156 } 00:11:35.156 ] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.156 BaseBdev3 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.156 [ 00:11:35.156 { 00:11:35.156 "name": "BaseBdev3", 00:11:35.156 "aliases": [ 00:11:35.156 "cc5189f0-5a03-4d84-a46e-96f413b57396" 00:11:35.156 ], 00:11:35.156 "product_name": "Malloc disk", 00:11:35.156 "block_size": 512, 00:11:35.156 "num_blocks": 65536, 00:11:35.156 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:35.156 "assigned_rate_limits": { 00:11:35.156 "rw_ios_per_sec": 0, 00:11:35.156 "rw_mbytes_per_sec": 0, 00:11:35.156 "r_mbytes_per_sec": 0, 00:11:35.156 "w_mbytes_per_sec": 0 00:11:35.156 }, 00:11:35.156 "claimed": false, 00:11:35.156 "zoned": false, 00:11:35.156 "supported_io_types": { 00:11:35.156 "read": true, 00:11:35.156 "write": true, 00:11:35.156 "unmap": true, 00:11:35.156 "flush": true, 00:11:35.156 "reset": true, 00:11:35.156 "nvme_admin": false, 00:11:35.156 "nvme_io": false, 00:11:35.156 "nvme_io_md": false, 00:11:35.156 "write_zeroes": true, 00:11:35.156 "zcopy": true, 00:11:35.156 "get_zone_info": false, 00:11:35.156 "zone_management": false, 00:11:35.156 "zone_append": false, 00:11:35.156 "compare": false, 00:11:35.156 "compare_and_write": false, 00:11:35.156 "abort": true, 00:11:35.156 "seek_hole": false, 00:11:35.156 "seek_data": false, 
00:11:35.156 "copy": true, 00:11:35.156 "nvme_iov_md": false 00:11:35.156 }, 00:11:35.156 "memory_domains": [ 00:11:35.156 { 00:11:35.156 "dma_device_id": "system", 00:11:35.156 "dma_device_type": 1 00:11:35.156 }, 00:11:35.156 { 00:11:35.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.156 "dma_device_type": 2 00:11:35.156 } 00:11:35.156 ], 00:11:35.156 "driver_specific": {} 00:11:35.156 } 00:11:35.156 ] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.156 BaseBdev4 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.156 
01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.156 [ 00:11:35.156 { 00:11:35.156 "name": "BaseBdev4", 00:11:35.156 "aliases": [ 00:11:35.156 "cde3349b-ae16-4c55-a53e-ef13c6c26455" 00:11:35.156 ], 00:11:35.156 "product_name": "Malloc disk", 00:11:35.156 "block_size": 512, 00:11:35.156 "num_blocks": 65536, 00:11:35.156 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:35.156 "assigned_rate_limits": { 00:11:35.156 "rw_ios_per_sec": 0, 00:11:35.156 "rw_mbytes_per_sec": 0, 00:11:35.156 "r_mbytes_per_sec": 0, 00:11:35.156 "w_mbytes_per_sec": 0 00:11:35.156 }, 00:11:35.156 "claimed": false, 00:11:35.156 "zoned": false, 00:11:35.156 "supported_io_types": { 00:11:35.156 "read": true, 00:11:35.156 "write": true, 00:11:35.156 "unmap": true, 00:11:35.156 "flush": true, 00:11:35.156 "reset": true, 00:11:35.156 "nvme_admin": false, 00:11:35.156 "nvme_io": false, 00:11:35.156 "nvme_io_md": false, 00:11:35.156 "write_zeroes": true, 00:11:35.156 "zcopy": true, 00:11:35.156 "get_zone_info": false, 00:11:35.156 "zone_management": false, 00:11:35.156 "zone_append": false, 00:11:35.156 "compare": false, 00:11:35.156 "compare_and_write": false, 00:11:35.156 "abort": true, 00:11:35.156 "seek_hole": false, 00:11:35.156 "seek_data": false, 00:11:35.156 
"copy": true, 00:11:35.156 "nvme_iov_md": false 00:11:35.156 }, 00:11:35.156 "memory_domains": [ 00:11:35.156 { 00:11:35.156 "dma_device_id": "system", 00:11:35.156 "dma_device_type": 1 00:11:35.156 }, 00:11:35.156 { 00:11:35.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.156 "dma_device_type": 2 00:11:35.156 } 00:11:35.156 ], 00:11:35.156 "driver_specific": {} 00:11:35.156 } 00:11:35.156 ] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.156 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.157 [2024-11-17 01:31:43.516245] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.157 [2024-11-17 01:31:43.516288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.157 [2024-11-17 01:31:43.516311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.157 [2024-11-17 01:31:43.518093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.157 [2024-11-17 01:31:43.518149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.157 01:31:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.157 "name": "Existed_Raid", 00:11:35.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.157 "strip_size_kb": 64, 00:11:35.157 "state": "configuring", 00:11:35.157 
"raid_level": "concat", 00:11:35.157 "superblock": false, 00:11:35.157 "num_base_bdevs": 4, 00:11:35.157 "num_base_bdevs_discovered": 3, 00:11:35.157 "num_base_bdevs_operational": 4, 00:11:35.157 "base_bdevs_list": [ 00:11:35.157 { 00:11:35.157 "name": "BaseBdev1", 00:11:35.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.157 "is_configured": false, 00:11:35.157 "data_offset": 0, 00:11:35.157 "data_size": 0 00:11:35.157 }, 00:11:35.157 { 00:11:35.157 "name": "BaseBdev2", 00:11:35.157 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:35.157 "is_configured": true, 00:11:35.157 "data_offset": 0, 00:11:35.157 "data_size": 65536 00:11:35.157 }, 00:11:35.157 { 00:11:35.157 "name": "BaseBdev3", 00:11:35.157 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:35.157 "is_configured": true, 00:11:35.157 "data_offset": 0, 00:11:35.157 "data_size": 65536 00:11:35.157 }, 00:11:35.157 { 00:11:35.157 "name": "BaseBdev4", 00:11:35.157 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:35.157 "is_configured": true, 00:11:35.157 "data_offset": 0, 00:11:35.157 "data_size": 65536 00:11:35.157 } 00:11:35.157 ] 00:11:35.157 }' 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.157 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.727 [2024-11-17 01:31:43.939539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.727 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.728 "name": "Existed_Raid", 00:11:35.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.728 "strip_size_kb": 64, 00:11:35.728 "state": "configuring", 00:11:35.728 "raid_level": "concat", 00:11:35.728 "superblock": false, 
00:11:35.728 "num_base_bdevs": 4, 00:11:35.728 "num_base_bdevs_discovered": 2, 00:11:35.728 "num_base_bdevs_operational": 4, 00:11:35.728 "base_bdevs_list": [ 00:11:35.728 { 00:11:35.728 "name": "BaseBdev1", 00:11:35.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.728 "is_configured": false, 00:11:35.728 "data_offset": 0, 00:11:35.728 "data_size": 0 00:11:35.728 }, 00:11:35.728 { 00:11:35.728 "name": null, 00:11:35.728 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:35.728 "is_configured": false, 00:11:35.728 "data_offset": 0, 00:11:35.728 "data_size": 65536 00:11:35.728 }, 00:11:35.728 { 00:11:35.728 "name": "BaseBdev3", 00:11:35.728 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:35.728 "is_configured": true, 00:11:35.728 "data_offset": 0, 00:11:35.728 "data_size": 65536 00:11:35.728 }, 00:11:35.728 { 00:11:35.728 "name": "BaseBdev4", 00:11:35.728 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:35.728 "is_configured": true, 00:11:35.728 "data_offset": 0, 00:11:35.728 "data_size": 65536 00:11:35.728 } 00:11:35.728 ] 00:11:35.728 }' 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.728 01:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.988 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.988 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.988 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.988 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:35.988 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.988 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:35.988 01:31:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:35.988 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.988 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.248 [2024-11-17 01:31:44.450838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.248 BaseBdev1 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.248 [ 00:11:36.248 { 00:11:36.248 "name": "BaseBdev1", 00:11:36.248 "aliases": [ 00:11:36.248 "923440c2-4da2-48bf-97da-0d63dc46a98e" 00:11:36.248 ], 00:11:36.248 "product_name": "Malloc disk", 00:11:36.248 "block_size": 512, 00:11:36.248 "num_blocks": 65536, 00:11:36.248 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:36.248 "assigned_rate_limits": { 00:11:36.248 "rw_ios_per_sec": 0, 00:11:36.248 "rw_mbytes_per_sec": 0, 00:11:36.248 "r_mbytes_per_sec": 0, 00:11:36.248 "w_mbytes_per_sec": 0 00:11:36.248 }, 00:11:36.248 "claimed": true, 00:11:36.248 "claim_type": "exclusive_write", 00:11:36.248 "zoned": false, 00:11:36.248 "supported_io_types": { 00:11:36.248 "read": true, 00:11:36.248 "write": true, 00:11:36.248 "unmap": true, 00:11:36.248 "flush": true, 00:11:36.248 "reset": true, 00:11:36.248 "nvme_admin": false, 00:11:36.248 "nvme_io": false, 00:11:36.248 "nvme_io_md": false, 00:11:36.248 "write_zeroes": true, 00:11:36.248 "zcopy": true, 00:11:36.248 "get_zone_info": false, 00:11:36.248 "zone_management": false, 00:11:36.248 "zone_append": false, 00:11:36.248 "compare": false, 00:11:36.248 "compare_and_write": false, 00:11:36.248 "abort": true, 00:11:36.248 "seek_hole": false, 00:11:36.248 "seek_data": false, 00:11:36.248 "copy": true, 00:11:36.248 "nvme_iov_md": false 00:11:36.248 }, 00:11:36.248 "memory_domains": [ 00:11:36.248 { 00:11:36.248 "dma_device_id": "system", 00:11:36.248 "dma_device_type": 1 00:11:36.248 }, 00:11:36.248 { 00:11:36.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.248 "dma_device_type": 2 00:11:36.248 } 00:11:36.248 ], 00:11:36.248 "driver_specific": {} 00:11:36.248 } 00:11:36.248 ] 00:11:36.248 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.249 "name": "Existed_Raid", 00:11:36.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.249 "strip_size_kb": 64, 00:11:36.249 "state": "configuring", 00:11:36.249 "raid_level": "concat", 00:11:36.249 "superblock": false, 
00:11:36.249 "num_base_bdevs": 4, 00:11:36.249 "num_base_bdevs_discovered": 3, 00:11:36.249 "num_base_bdevs_operational": 4, 00:11:36.249 "base_bdevs_list": [ 00:11:36.249 { 00:11:36.249 "name": "BaseBdev1", 00:11:36.249 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:36.249 "is_configured": true, 00:11:36.249 "data_offset": 0, 00:11:36.249 "data_size": 65536 00:11:36.249 }, 00:11:36.249 { 00:11:36.249 "name": null, 00:11:36.249 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:36.249 "is_configured": false, 00:11:36.249 "data_offset": 0, 00:11:36.249 "data_size": 65536 00:11:36.249 }, 00:11:36.249 { 00:11:36.249 "name": "BaseBdev3", 00:11:36.249 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:36.249 "is_configured": true, 00:11:36.249 "data_offset": 0, 00:11:36.249 "data_size": 65536 00:11:36.249 }, 00:11:36.249 { 00:11:36.249 "name": "BaseBdev4", 00:11:36.249 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:36.249 "is_configured": true, 00:11:36.249 "data_offset": 0, 00:11:36.249 "data_size": 65536 00:11:36.249 } 00:11:36.249 ] 00:11:36.249 }' 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.249 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:36.509 01:31:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.509 [2024-11-17 01:31:44.938012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.509 01:31:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.509 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.769 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.769 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.769 "name": "Existed_Raid", 00:11:36.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.769 "strip_size_kb": 64, 00:11:36.769 "state": "configuring", 00:11:36.769 "raid_level": "concat", 00:11:36.769 "superblock": false, 00:11:36.769 "num_base_bdevs": 4, 00:11:36.769 "num_base_bdevs_discovered": 2, 00:11:36.769 "num_base_bdevs_operational": 4, 00:11:36.769 "base_bdevs_list": [ 00:11:36.769 { 00:11:36.769 "name": "BaseBdev1", 00:11:36.769 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:36.769 "is_configured": true, 00:11:36.769 "data_offset": 0, 00:11:36.769 "data_size": 65536 00:11:36.769 }, 00:11:36.769 { 00:11:36.769 "name": null, 00:11:36.769 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:36.769 "is_configured": false, 00:11:36.769 "data_offset": 0, 00:11:36.769 "data_size": 65536 00:11:36.769 }, 00:11:36.769 { 00:11:36.769 "name": null, 00:11:36.769 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:36.769 "is_configured": false, 00:11:36.769 "data_offset": 0, 00:11:36.769 "data_size": 65536 00:11:36.769 }, 00:11:36.769 { 00:11:36.769 "name": "BaseBdev4", 00:11:36.769 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:36.769 "is_configured": true, 00:11:36.769 "data_offset": 0, 00:11:36.769 "data_size": 65536 00:11:36.769 } 00:11:36.769 ] 00:11:36.769 }' 00:11:36.769 01:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.769 01:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.029 [2024-11-17 01:31:45.425181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.029 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.029 "name": "Existed_Raid", 00:11:37.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.029 "strip_size_kb": 64, 00:11:37.029 "state": "configuring", 00:11:37.029 "raid_level": "concat", 00:11:37.029 "superblock": false, 00:11:37.029 "num_base_bdevs": 4, 00:11:37.029 "num_base_bdevs_discovered": 3, 00:11:37.029 "num_base_bdevs_operational": 4, 00:11:37.029 "base_bdevs_list": [ 00:11:37.029 { 00:11:37.029 "name": "BaseBdev1", 00:11:37.029 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:37.029 "is_configured": true, 00:11:37.029 "data_offset": 0, 00:11:37.029 "data_size": 65536 00:11:37.029 }, 00:11:37.029 { 00:11:37.029 "name": null, 00:11:37.029 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:37.029 "is_configured": false, 00:11:37.029 "data_offset": 0, 00:11:37.029 "data_size": 65536 00:11:37.029 }, 00:11:37.029 { 00:11:37.029 "name": "BaseBdev3", 00:11:37.029 "uuid": 
"cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:37.029 "is_configured": true, 00:11:37.029 "data_offset": 0, 00:11:37.029 "data_size": 65536 00:11:37.029 }, 00:11:37.029 { 00:11:37.029 "name": "BaseBdev4", 00:11:37.029 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:37.029 "is_configured": true, 00:11:37.029 "data_offset": 0, 00:11:37.030 "data_size": 65536 00:11:37.030 } 00:11:37.030 ] 00:11:37.030 }' 00:11:37.030 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.030 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.599 [2024-11-17 01:31:45.904386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.599 01:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.599 "name": "Existed_Raid", 00:11:37.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.599 "strip_size_kb": 64, 00:11:37.599 "state": "configuring", 00:11:37.599 "raid_level": "concat", 00:11:37.599 "superblock": false, 00:11:37.599 "num_base_bdevs": 4, 00:11:37.599 
"num_base_bdevs_discovered": 2, 00:11:37.599 "num_base_bdevs_operational": 4, 00:11:37.599 "base_bdevs_list": [ 00:11:37.599 { 00:11:37.599 "name": null, 00:11:37.599 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:37.599 "is_configured": false, 00:11:37.599 "data_offset": 0, 00:11:37.599 "data_size": 65536 00:11:37.599 }, 00:11:37.599 { 00:11:37.599 "name": null, 00:11:37.599 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:37.599 "is_configured": false, 00:11:37.599 "data_offset": 0, 00:11:37.599 "data_size": 65536 00:11:37.599 }, 00:11:37.599 { 00:11:37.599 "name": "BaseBdev3", 00:11:37.599 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:37.599 "is_configured": true, 00:11:37.599 "data_offset": 0, 00:11:37.599 "data_size": 65536 00:11:37.599 }, 00:11:37.599 { 00:11:37.599 "name": "BaseBdev4", 00:11:37.599 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:37.599 "is_configured": true, 00:11:37.599 "data_offset": 0, 00:11:37.599 "data_size": 65536 00:11:37.599 } 00:11:37.599 ] 00:11:37.599 }' 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.599 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.167 [2024-11-17 01:31:46.475463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.167 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.168 "name": "Existed_Raid", 00:11:38.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.168 "strip_size_kb": 64, 00:11:38.168 "state": "configuring", 00:11:38.168 "raid_level": "concat", 00:11:38.168 "superblock": false, 00:11:38.168 "num_base_bdevs": 4, 00:11:38.168 "num_base_bdevs_discovered": 3, 00:11:38.168 "num_base_bdevs_operational": 4, 00:11:38.168 "base_bdevs_list": [ 00:11:38.168 { 00:11:38.168 "name": null, 00:11:38.168 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:38.168 "is_configured": false, 00:11:38.168 "data_offset": 0, 00:11:38.168 "data_size": 65536 00:11:38.168 }, 00:11:38.168 { 00:11:38.168 "name": "BaseBdev2", 00:11:38.168 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:38.168 "is_configured": true, 00:11:38.168 "data_offset": 0, 00:11:38.168 "data_size": 65536 00:11:38.168 }, 00:11:38.168 { 00:11:38.168 "name": "BaseBdev3", 00:11:38.168 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:38.168 "is_configured": true, 00:11:38.168 "data_offset": 0, 00:11:38.168 "data_size": 65536 00:11:38.168 }, 00:11:38.168 { 00:11:38.168 "name": "BaseBdev4", 00:11:38.168 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:38.168 "is_configured": true, 00:11:38.168 "data_offset": 0, 00:11:38.168 "data_size": 65536 00:11:38.168 } 00:11:38.168 ] 00:11:38.168 }' 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.168 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.445 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:38.445 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.445 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.445 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 923440c2-4da2-48bf-97da-0d63dc46a98e 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.705 01:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.705 [2024-11-17 01:31:47.022944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:38.705 [2024-11-17 01:31:47.022992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:38.705 [2024-11-17 01:31:47.022999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:38.705 [2024-11-17 01:31:47.023251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:38.705 [2024-11-17 01:31:47.023392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:38.705 [2024-11-17 01:31:47.023416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:38.705 [2024-11-17 01:31:47.023685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.705 NewBaseBdev 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.705 [ 00:11:38.705 { 00:11:38.705 "name": "NewBaseBdev", 00:11:38.705 "aliases": [ 00:11:38.705 "923440c2-4da2-48bf-97da-0d63dc46a98e" 00:11:38.705 ], 00:11:38.705 "product_name": "Malloc disk", 00:11:38.705 "block_size": 512, 00:11:38.705 "num_blocks": 65536, 00:11:38.705 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:38.705 "assigned_rate_limits": { 00:11:38.705 "rw_ios_per_sec": 0, 00:11:38.705 "rw_mbytes_per_sec": 0, 00:11:38.705 "r_mbytes_per_sec": 0, 00:11:38.705 "w_mbytes_per_sec": 0 00:11:38.705 }, 00:11:38.705 "claimed": true, 00:11:38.705 "claim_type": "exclusive_write", 00:11:38.705 "zoned": false, 00:11:38.705 "supported_io_types": { 00:11:38.705 "read": true, 00:11:38.705 "write": true, 00:11:38.705 "unmap": true, 00:11:38.705 "flush": true, 00:11:38.705 "reset": true, 00:11:38.705 "nvme_admin": false, 00:11:38.705 "nvme_io": false, 00:11:38.705 "nvme_io_md": false, 00:11:38.705 "write_zeroes": true, 00:11:38.705 "zcopy": true, 00:11:38.705 "get_zone_info": false, 00:11:38.705 "zone_management": false, 00:11:38.705 "zone_append": false, 00:11:38.705 "compare": false, 00:11:38.705 "compare_and_write": false, 00:11:38.705 "abort": true, 00:11:38.705 "seek_hole": false, 00:11:38.705 "seek_data": false, 00:11:38.705 "copy": true, 00:11:38.705 "nvme_iov_md": false 00:11:38.705 }, 00:11:38.705 "memory_domains": [ 00:11:38.705 { 00:11:38.705 "dma_device_id": "system", 00:11:38.705 "dma_device_type": 1 00:11:38.705 }, 00:11:38.705 { 00:11:38.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.705 "dma_device_type": 2 00:11:38.705 } 00:11:38.705 ], 00:11:38.705 "driver_specific": {} 00:11:38.705 } 00:11:38.705 ] 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.705 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.705 "name": "Existed_Raid", 00:11:38.705 "uuid": "77f12ce3-899f-4b76-a239-6c3b18438308", 00:11:38.705 "strip_size_kb": 64, 00:11:38.705 "state": "online", 00:11:38.705 "raid_level": "concat", 00:11:38.705 "superblock": false, 00:11:38.705 
"num_base_bdevs": 4, 00:11:38.705 "num_base_bdevs_discovered": 4, 00:11:38.705 "num_base_bdevs_operational": 4, 00:11:38.705 "base_bdevs_list": [ 00:11:38.705 { 00:11:38.705 "name": "NewBaseBdev", 00:11:38.705 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:38.705 "is_configured": true, 00:11:38.705 "data_offset": 0, 00:11:38.706 "data_size": 65536 00:11:38.706 }, 00:11:38.706 { 00:11:38.706 "name": "BaseBdev2", 00:11:38.706 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:38.706 "is_configured": true, 00:11:38.706 "data_offset": 0, 00:11:38.706 "data_size": 65536 00:11:38.706 }, 00:11:38.706 { 00:11:38.706 "name": "BaseBdev3", 00:11:38.706 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:38.706 "is_configured": true, 00:11:38.706 "data_offset": 0, 00:11:38.706 "data_size": 65536 00:11:38.706 }, 00:11:38.706 { 00:11:38.706 "name": "BaseBdev4", 00:11:38.706 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:38.706 "is_configured": true, 00:11:38.706 "data_offset": 0, 00:11:38.706 "data_size": 65536 00:11:38.706 } 00:11:38.706 ] 00:11:38.706 }' 00:11:38.706 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.706 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.275 01:31:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.275 [2024-11-17 01:31:47.494502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.275 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.275 "name": "Existed_Raid", 00:11:39.275 "aliases": [ 00:11:39.275 "77f12ce3-899f-4b76-a239-6c3b18438308" 00:11:39.275 ], 00:11:39.275 "product_name": "Raid Volume", 00:11:39.275 "block_size": 512, 00:11:39.275 "num_blocks": 262144, 00:11:39.275 "uuid": "77f12ce3-899f-4b76-a239-6c3b18438308", 00:11:39.275 "assigned_rate_limits": { 00:11:39.275 "rw_ios_per_sec": 0, 00:11:39.275 "rw_mbytes_per_sec": 0, 00:11:39.275 "r_mbytes_per_sec": 0, 00:11:39.275 "w_mbytes_per_sec": 0 00:11:39.275 }, 00:11:39.275 "claimed": false, 00:11:39.275 "zoned": false, 00:11:39.276 "supported_io_types": { 00:11:39.276 "read": true, 00:11:39.276 "write": true, 00:11:39.276 "unmap": true, 00:11:39.276 "flush": true, 00:11:39.276 "reset": true, 00:11:39.276 "nvme_admin": false, 00:11:39.276 "nvme_io": false, 00:11:39.276 "nvme_io_md": false, 00:11:39.276 "write_zeroes": true, 00:11:39.276 "zcopy": false, 00:11:39.276 "get_zone_info": false, 00:11:39.276 "zone_management": false, 00:11:39.276 "zone_append": false, 00:11:39.276 "compare": false, 00:11:39.276 "compare_and_write": false, 00:11:39.276 "abort": false, 00:11:39.276 "seek_hole": false, 00:11:39.276 "seek_data": false, 00:11:39.276 "copy": false, 00:11:39.276 "nvme_iov_md": false 00:11:39.276 }, 
00:11:39.276 "memory_domains": [ 00:11:39.276 { 00:11:39.276 "dma_device_id": "system", 00:11:39.276 "dma_device_type": 1 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.276 "dma_device_type": 2 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "dma_device_id": "system", 00:11:39.276 "dma_device_type": 1 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.276 "dma_device_type": 2 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "dma_device_id": "system", 00:11:39.276 "dma_device_type": 1 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.276 "dma_device_type": 2 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "dma_device_id": "system", 00:11:39.276 "dma_device_type": 1 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.276 "dma_device_type": 2 00:11:39.276 } 00:11:39.276 ], 00:11:39.276 "driver_specific": { 00:11:39.276 "raid": { 00:11:39.276 "uuid": "77f12ce3-899f-4b76-a239-6c3b18438308", 00:11:39.276 "strip_size_kb": 64, 00:11:39.276 "state": "online", 00:11:39.276 "raid_level": "concat", 00:11:39.276 "superblock": false, 00:11:39.276 "num_base_bdevs": 4, 00:11:39.276 "num_base_bdevs_discovered": 4, 00:11:39.276 "num_base_bdevs_operational": 4, 00:11:39.276 "base_bdevs_list": [ 00:11:39.276 { 00:11:39.276 "name": "NewBaseBdev", 00:11:39.276 "uuid": "923440c2-4da2-48bf-97da-0d63dc46a98e", 00:11:39.276 "is_configured": true, 00:11:39.276 "data_offset": 0, 00:11:39.276 "data_size": 65536 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "name": "BaseBdev2", 00:11:39.276 "uuid": "77111af3-f42b-4b38-a15d-f3ba7dc25244", 00:11:39.276 "is_configured": true, 00:11:39.276 "data_offset": 0, 00:11:39.276 "data_size": 65536 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "name": "BaseBdev3", 00:11:39.276 "uuid": "cc5189f0-5a03-4d84-a46e-96f413b57396", 00:11:39.276 "is_configured": true, 00:11:39.276 "data_offset": 0, 
00:11:39.276 "data_size": 65536 00:11:39.276 }, 00:11:39.276 { 00:11:39.276 "name": "BaseBdev4", 00:11:39.276 "uuid": "cde3349b-ae16-4c55-a53e-ef13c6c26455", 00:11:39.276 "is_configured": true, 00:11:39.276 "data_offset": 0, 00:11:39.276 "data_size": 65536 00:11:39.276 } 00:11:39.276 ] 00:11:39.276 } 00:11:39.276 } 00:11:39.276 }' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:39.276 BaseBdev2 00:11:39.276 BaseBdev3 00:11:39.276 BaseBdev4' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.276 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.535 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.535 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.536 [2024-11-17 01:31:47.817618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:39.536 [2024-11-17 01:31:47.817647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.536 [2024-11-17 01:31:47.817717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.536 [2024-11-17 01:31:47.817793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.536 [2024-11-17 01:31:47.817803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71059 00:11:39.536 01:31:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71059 ']' 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71059 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71059 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.536 killing process with pid 71059 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71059' 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71059 00:11:39.536 [2024-11-17 01:31:47.871531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:39.536 01:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71059 00:11:40.104 [2024-11-17 01:31:48.257096] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:41.042 00:11:41.042 real 0m11.324s 00:11:41.042 user 0m17.997s 00:11:41.042 sys 0m2.075s 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.042 ************************************ 00:11:41.042 END TEST raid_state_function_test 00:11:41.042 ************************************ 00:11:41.042 01:31:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:41.042 01:31:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:41.042 01:31:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.042 01:31:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.042 ************************************ 00:11:41.042 START TEST raid_state_function_test_sb 00:11:41.042 ************************************ 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71725 00:11:41.042 01:31:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71725' 00:11:41.042 Process raid pid: 71725 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71725 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71725 ']' 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.042 01:31:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.301 [2024-11-17 01:31:49.503932] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:41.301 [2024-11-17 01:31:49.504053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.301 [2024-11-17 01:31:49.678333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.560 [2024-11-17 01:31:49.793044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.560 [2024-11-17 01:31:49.990440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.560 [2024-11-17 01:31:49.990477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.128 [2024-11-17 01:31:50.343585] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.128 [2024-11-17 01:31:50.343707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.128 [2024-11-17 01:31:50.343721] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.128 [2024-11-17 01:31:50.343731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.128 [2024-11-17 01:31:50.343737] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:42.128 [2024-11-17 01:31:50.343745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.128 [2024-11-17 01:31:50.343751] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.128 [2024-11-17 01:31:50.343773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.128 
01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.128 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.128 "name": "Existed_Raid", 00:11:42.128 "uuid": "b0c52fd5-43cb-4386-ada0-de191b4ecaec", 00:11:42.128 "strip_size_kb": 64, 00:11:42.128 "state": "configuring", 00:11:42.128 "raid_level": "concat", 00:11:42.128 "superblock": true, 00:11:42.128 "num_base_bdevs": 4, 00:11:42.128 "num_base_bdevs_discovered": 0, 00:11:42.128 "num_base_bdevs_operational": 4, 00:11:42.128 "base_bdevs_list": [ 00:11:42.128 { 00:11:42.128 "name": "BaseBdev1", 00:11:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.128 "is_configured": false, 00:11:42.128 "data_offset": 0, 00:11:42.128 "data_size": 0 00:11:42.128 }, 00:11:42.128 { 00:11:42.128 "name": "BaseBdev2", 00:11:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.128 "is_configured": false, 00:11:42.128 "data_offset": 0, 00:11:42.128 "data_size": 0 00:11:42.128 }, 00:11:42.128 { 00:11:42.128 "name": "BaseBdev3", 00:11:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.128 "is_configured": false, 00:11:42.128 "data_offset": 0, 00:11:42.128 "data_size": 0 00:11:42.128 }, 00:11:42.128 { 00:11:42.129 "name": "BaseBdev4", 00:11:42.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.129 "is_configured": false, 00:11:42.129 "data_offset": 0, 00:11:42.129 "data_size": 0 00:11:42.129 } 00:11:42.129 ] 00:11:42.129 }' 00:11:42.129 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.129 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.388 01:31:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.388 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.388 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.388 [2024-11-17 01:31:50.838673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.388 [2024-11-17 01:31:50.838771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:42.388 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.388 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.388 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.388 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.388 [2024-11-17 01:31:50.846664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.388 [2024-11-17 01:31:50.846738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.388 [2024-11-17 01:31:50.846798] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.388 [2024-11-17 01:31:50.846823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.388 [2024-11-17 01:31:50.846842] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.388 [2024-11-17 01:31:50.846864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.388 [2024-11-17 01:31:50.846882] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:42.388 [2024-11-17 01:31:50.846912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.651 [2024-11-17 01:31:50.890913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.651 BaseBdev1 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.651 [ 00:11:42.651 { 00:11:42.651 "name": "BaseBdev1", 00:11:42.651 "aliases": [ 00:11:42.651 "7db4cf98-a63c-4b5a-92bf-390e15b94ca9" 00:11:42.651 ], 00:11:42.651 "product_name": "Malloc disk", 00:11:42.651 "block_size": 512, 00:11:42.651 "num_blocks": 65536, 00:11:42.651 "uuid": "7db4cf98-a63c-4b5a-92bf-390e15b94ca9", 00:11:42.651 "assigned_rate_limits": { 00:11:42.651 "rw_ios_per_sec": 0, 00:11:42.651 "rw_mbytes_per_sec": 0, 00:11:42.651 "r_mbytes_per_sec": 0, 00:11:42.651 "w_mbytes_per_sec": 0 00:11:42.651 }, 00:11:42.651 "claimed": true, 00:11:42.651 "claim_type": "exclusive_write", 00:11:42.651 "zoned": false, 00:11:42.651 "supported_io_types": { 00:11:42.651 "read": true, 00:11:42.651 "write": true, 00:11:42.651 "unmap": true, 00:11:42.651 "flush": true, 00:11:42.651 "reset": true, 00:11:42.651 "nvme_admin": false, 00:11:42.651 "nvme_io": false, 00:11:42.651 "nvme_io_md": false, 00:11:42.651 "write_zeroes": true, 00:11:42.651 "zcopy": true, 00:11:42.651 "get_zone_info": false, 00:11:42.651 "zone_management": false, 00:11:42.651 "zone_append": false, 00:11:42.651 "compare": false, 00:11:42.651 "compare_and_write": false, 00:11:42.651 "abort": true, 00:11:42.651 "seek_hole": false, 00:11:42.651 "seek_data": false, 00:11:42.651 "copy": true, 00:11:42.651 "nvme_iov_md": false 00:11:42.651 }, 00:11:42.651 "memory_domains": [ 00:11:42.651 { 00:11:42.651 "dma_device_id": "system", 00:11:42.651 "dma_device_type": 1 00:11:42.651 }, 00:11:42.651 { 00:11:42.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.651 "dma_device_type": 2 00:11:42.651 } 
00:11:42.651 ], 00:11:42.651 "driver_specific": {} 00:11:42.651 } 00:11:42.651 ] 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.651 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.652 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.652 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.652 01:31:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.652 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.652 "name": "Existed_Raid", 00:11:42.652 "uuid": "093bbdc6-cc5e-4951-9180-84ea4a01de27", 00:11:42.652 "strip_size_kb": 64, 00:11:42.652 "state": "configuring", 00:11:42.652 "raid_level": "concat", 00:11:42.652 "superblock": true, 00:11:42.652 "num_base_bdevs": 4, 00:11:42.652 "num_base_bdevs_discovered": 1, 00:11:42.652 "num_base_bdevs_operational": 4, 00:11:42.652 "base_bdevs_list": [ 00:11:42.652 { 00:11:42.652 "name": "BaseBdev1", 00:11:42.652 "uuid": "7db4cf98-a63c-4b5a-92bf-390e15b94ca9", 00:11:42.652 "is_configured": true, 00:11:42.652 "data_offset": 2048, 00:11:42.652 "data_size": 63488 00:11:42.652 }, 00:11:42.652 { 00:11:42.652 "name": "BaseBdev2", 00:11:42.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.652 "is_configured": false, 00:11:42.652 "data_offset": 0, 00:11:42.652 "data_size": 0 00:11:42.652 }, 00:11:42.652 { 00:11:42.652 "name": "BaseBdev3", 00:11:42.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.652 "is_configured": false, 00:11:42.652 "data_offset": 0, 00:11:42.652 "data_size": 0 00:11:42.652 }, 00:11:42.652 { 00:11:42.652 "name": "BaseBdev4", 00:11:42.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.652 "is_configured": false, 00:11:42.652 "data_offset": 0, 00:11:42.652 "data_size": 0 00:11:42.652 } 00:11:42.652 ] 00:11:42.652 }' 00:11:42.652 01:31:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.652 01:31:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.911 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.911 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.911 01:31:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.911 [2024-11-17 01:31:51.362143] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.911 [2024-11-17 01:31:51.362259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:42.911 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.911 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.911 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.911 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.171 [2024-11-17 01:31:51.374210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.171 [2024-11-17 01:31:51.376098] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.171 [2024-11-17 01:31:51.376142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.171 [2024-11-17 01:31:51.376152] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.171 [2024-11-17 01:31:51.376163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.171 [2024-11-17 01:31:51.376169] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:43.171 [2024-11-17 01:31:51.376178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.171 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:43.171 "name": "Existed_Raid", 00:11:43.171 "uuid": "19d960e2-1a81-493b-a837-06a3416e4024", 00:11:43.171 "strip_size_kb": 64, 00:11:43.171 "state": "configuring", 00:11:43.171 "raid_level": "concat", 00:11:43.171 "superblock": true, 00:11:43.171 "num_base_bdevs": 4, 00:11:43.171 "num_base_bdevs_discovered": 1, 00:11:43.171 "num_base_bdevs_operational": 4, 00:11:43.171 "base_bdevs_list": [ 00:11:43.171 { 00:11:43.171 "name": "BaseBdev1", 00:11:43.171 "uuid": "7db4cf98-a63c-4b5a-92bf-390e15b94ca9", 00:11:43.171 "is_configured": true, 00:11:43.171 "data_offset": 2048, 00:11:43.171 "data_size": 63488 00:11:43.171 }, 00:11:43.171 { 00:11:43.171 "name": "BaseBdev2", 00:11:43.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.171 "is_configured": false, 00:11:43.171 "data_offset": 0, 00:11:43.171 "data_size": 0 00:11:43.171 }, 00:11:43.171 { 00:11:43.171 "name": "BaseBdev3", 00:11:43.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.171 "is_configured": false, 00:11:43.171 "data_offset": 0, 00:11:43.171 "data_size": 0 00:11:43.171 }, 00:11:43.171 { 00:11:43.172 "name": "BaseBdev4", 00:11:43.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.172 "is_configured": false, 00:11:43.172 "data_offset": 0, 00:11:43.172 "data_size": 0 00:11:43.172 } 00:11:43.172 ] 00:11:43.172 }' 00:11:43.172 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.172 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.431 [2024-11-17 01:31:51.870912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:43.431 BaseBdev2 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.431 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.691 [ 00:11:43.691 { 00:11:43.691 "name": "BaseBdev2", 00:11:43.691 "aliases": [ 00:11:43.691 "8343150e-bcac-43b3-a39f-805f51bb6d29" 00:11:43.691 ], 00:11:43.691 "product_name": "Malloc disk", 00:11:43.691 "block_size": 512, 00:11:43.691 "num_blocks": 65536, 00:11:43.691 "uuid": "8343150e-bcac-43b3-a39f-805f51bb6d29", 
00:11:43.691 "assigned_rate_limits": { 00:11:43.691 "rw_ios_per_sec": 0, 00:11:43.691 "rw_mbytes_per_sec": 0, 00:11:43.691 "r_mbytes_per_sec": 0, 00:11:43.691 "w_mbytes_per_sec": 0 00:11:43.691 }, 00:11:43.691 "claimed": true, 00:11:43.691 "claim_type": "exclusive_write", 00:11:43.691 "zoned": false, 00:11:43.691 "supported_io_types": { 00:11:43.691 "read": true, 00:11:43.691 "write": true, 00:11:43.691 "unmap": true, 00:11:43.691 "flush": true, 00:11:43.691 "reset": true, 00:11:43.691 "nvme_admin": false, 00:11:43.691 "nvme_io": false, 00:11:43.691 "nvme_io_md": false, 00:11:43.691 "write_zeroes": true, 00:11:43.691 "zcopy": true, 00:11:43.691 "get_zone_info": false, 00:11:43.691 "zone_management": false, 00:11:43.691 "zone_append": false, 00:11:43.691 "compare": false, 00:11:43.691 "compare_and_write": false, 00:11:43.691 "abort": true, 00:11:43.691 "seek_hole": false, 00:11:43.691 "seek_data": false, 00:11:43.691 "copy": true, 00:11:43.691 "nvme_iov_md": false 00:11:43.691 }, 00:11:43.691 "memory_domains": [ 00:11:43.691 { 00:11:43.691 "dma_device_id": "system", 00:11:43.691 "dma_device_type": 1 00:11:43.691 }, 00:11:43.691 { 00:11:43.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.691 "dma_device_type": 2 00:11:43.691 } 00:11:43.691 ], 00:11:43.691 "driver_specific": {} 00:11:43.691 } 00:11:43.691 ] 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.691 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.692 "name": "Existed_Raid", 00:11:43.692 "uuid": "19d960e2-1a81-493b-a837-06a3416e4024", 00:11:43.692 "strip_size_kb": 64, 00:11:43.692 "state": "configuring", 00:11:43.692 "raid_level": "concat", 00:11:43.692 "superblock": true, 00:11:43.692 "num_base_bdevs": 4, 00:11:43.692 "num_base_bdevs_discovered": 2, 00:11:43.692 
"num_base_bdevs_operational": 4, 00:11:43.692 "base_bdevs_list": [ 00:11:43.692 { 00:11:43.692 "name": "BaseBdev1", 00:11:43.692 "uuid": "7db4cf98-a63c-4b5a-92bf-390e15b94ca9", 00:11:43.692 "is_configured": true, 00:11:43.692 "data_offset": 2048, 00:11:43.692 "data_size": 63488 00:11:43.692 }, 00:11:43.692 { 00:11:43.692 "name": "BaseBdev2", 00:11:43.692 "uuid": "8343150e-bcac-43b3-a39f-805f51bb6d29", 00:11:43.692 "is_configured": true, 00:11:43.692 "data_offset": 2048, 00:11:43.692 "data_size": 63488 00:11:43.692 }, 00:11:43.692 { 00:11:43.692 "name": "BaseBdev3", 00:11:43.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.692 "is_configured": false, 00:11:43.692 "data_offset": 0, 00:11:43.692 "data_size": 0 00:11:43.692 }, 00:11:43.692 { 00:11:43.692 "name": "BaseBdev4", 00:11:43.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.692 "is_configured": false, 00:11:43.692 "data_offset": 0, 00:11:43.692 "data_size": 0 00:11:43.692 } 00:11:43.692 ] 00:11:43.692 }' 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.692 01:31:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.951 [2024-11-17 01:31:52.335229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.951 BaseBdev3 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.951 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.952 [ 00:11:43.952 { 00:11:43.952 "name": "BaseBdev3", 00:11:43.952 "aliases": [ 00:11:43.952 "db081350-6580-47cc-bf1b-bb2cb49ab427" 00:11:43.952 ], 00:11:43.952 "product_name": "Malloc disk", 00:11:43.952 "block_size": 512, 00:11:43.952 "num_blocks": 65536, 00:11:43.952 "uuid": "db081350-6580-47cc-bf1b-bb2cb49ab427", 00:11:43.952 "assigned_rate_limits": { 00:11:43.952 "rw_ios_per_sec": 0, 00:11:43.952 "rw_mbytes_per_sec": 0, 00:11:43.952 "r_mbytes_per_sec": 0, 00:11:43.952 "w_mbytes_per_sec": 0 00:11:43.952 }, 00:11:43.952 "claimed": true, 00:11:43.952 "claim_type": "exclusive_write", 00:11:43.952 "zoned": false, 00:11:43.952 "supported_io_types": { 
00:11:43.952 "read": true, 00:11:43.952 "write": true, 00:11:43.952 "unmap": true, 00:11:43.952 "flush": true, 00:11:43.952 "reset": true, 00:11:43.952 "nvme_admin": false, 00:11:43.952 "nvme_io": false, 00:11:43.952 "nvme_io_md": false, 00:11:43.952 "write_zeroes": true, 00:11:43.952 "zcopy": true, 00:11:43.952 "get_zone_info": false, 00:11:43.952 "zone_management": false, 00:11:43.952 "zone_append": false, 00:11:43.952 "compare": false, 00:11:43.952 "compare_and_write": false, 00:11:43.952 "abort": true, 00:11:43.952 "seek_hole": false, 00:11:43.952 "seek_data": false, 00:11:43.952 "copy": true, 00:11:43.952 "nvme_iov_md": false 00:11:43.952 }, 00:11:43.952 "memory_domains": [ 00:11:43.952 { 00:11:43.952 "dma_device_id": "system", 00:11:43.952 "dma_device_type": 1 00:11:43.952 }, 00:11:43.952 { 00:11:43.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.952 "dma_device_type": 2 00:11:43.952 } 00:11:43.952 ], 00:11:43.952 "driver_specific": {} 00:11:43.952 } 00:11:43.952 ] 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.952 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.210 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.210 "name": "Existed_Raid", 00:11:44.210 "uuid": "19d960e2-1a81-493b-a837-06a3416e4024", 00:11:44.210 "strip_size_kb": 64, 00:11:44.210 "state": "configuring", 00:11:44.210 "raid_level": "concat", 00:11:44.210 "superblock": true, 00:11:44.210 "num_base_bdevs": 4, 00:11:44.210 "num_base_bdevs_discovered": 3, 00:11:44.210 "num_base_bdevs_operational": 4, 00:11:44.210 "base_bdevs_list": [ 00:11:44.210 { 00:11:44.210 "name": "BaseBdev1", 00:11:44.210 "uuid": "7db4cf98-a63c-4b5a-92bf-390e15b94ca9", 00:11:44.210 "is_configured": true, 00:11:44.210 "data_offset": 2048, 00:11:44.210 "data_size": 63488 00:11:44.210 }, 00:11:44.210 { 00:11:44.210 "name": "BaseBdev2", 00:11:44.210 
"uuid": "8343150e-bcac-43b3-a39f-805f51bb6d29", 00:11:44.210 "is_configured": true, 00:11:44.210 "data_offset": 2048, 00:11:44.210 "data_size": 63488 00:11:44.210 }, 00:11:44.210 { 00:11:44.210 "name": "BaseBdev3", 00:11:44.210 "uuid": "db081350-6580-47cc-bf1b-bb2cb49ab427", 00:11:44.210 "is_configured": true, 00:11:44.210 "data_offset": 2048, 00:11:44.210 "data_size": 63488 00:11:44.210 }, 00:11:44.210 { 00:11:44.210 "name": "BaseBdev4", 00:11:44.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.210 "is_configured": false, 00:11:44.210 "data_offset": 0, 00:11:44.210 "data_size": 0 00:11:44.210 } 00:11:44.210 ] 00:11:44.210 }' 00:11:44.210 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.210 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.468 [2024-11-17 01:31:52.858898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.468 [2024-11-17 01:31:52.859236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:44.468 [2024-11-17 01:31:52.859291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.468 [2024-11-17 01:31:52.859575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:44.468 [2024-11-17 01:31:52.859782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:44.468 [2024-11-17 01:31:52.859833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:44.468 BaseBdev4 00:11:44.468 [2024-11-17 01:31:52.860019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.468 [ 00:11:44.468 { 00:11:44.468 "name": "BaseBdev4", 00:11:44.468 "aliases": [ 00:11:44.468 "1e32d282-d97c-4b54-88de-058dcfc2499e" 00:11:44.468 ], 00:11:44.468 "product_name": "Malloc disk", 00:11:44.468 "block_size": 512, 
00:11:44.468 "num_blocks": 65536, 00:11:44.468 "uuid": "1e32d282-d97c-4b54-88de-058dcfc2499e", 00:11:44.468 "assigned_rate_limits": { 00:11:44.468 "rw_ios_per_sec": 0, 00:11:44.468 "rw_mbytes_per_sec": 0, 00:11:44.468 "r_mbytes_per_sec": 0, 00:11:44.468 "w_mbytes_per_sec": 0 00:11:44.468 }, 00:11:44.468 "claimed": true, 00:11:44.468 "claim_type": "exclusive_write", 00:11:44.468 "zoned": false, 00:11:44.468 "supported_io_types": { 00:11:44.468 "read": true, 00:11:44.468 "write": true, 00:11:44.468 "unmap": true, 00:11:44.468 "flush": true, 00:11:44.468 "reset": true, 00:11:44.468 "nvme_admin": false, 00:11:44.468 "nvme_io": false, 00:11:44.468 "nvme_io_md": false, 00:11:44.468 "write_zeroes": true, 00:11:44.468 "zcopy": true, 00:11:44.468 "get_zone_info": false, 00:11:44.468 "zone_management": false, 00:11:44.468 "zone_append": false, 00:11:44.468 "compare": false, 00:11:44.468 "compare_and_write": false, 00:11:44.468 "abort": true, 00:11:44.468 "seek_hole": false, 00:11:44.468 "seek_data": false, 00:11:44.468 "copy": true, 00:11:44.468 "nvme_iov_md": false 00:11:44.468 }, 00:11:44.468 "memory_domains": [ 00:11:44.468 { 00:11:44.468 "dma_device_id": "system", 00:11:44.468 "dma_device_type": 1 00:11:44.468 }, 00:11:44.468 { 00:11:44.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.468 "dma_device_type": 2 00:11:44.468 } 00:11:44.468 ], 00:11:44.468 "driver_specific": {} 00:11:44.468 } 00:11:44.468 ] 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.468 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.726 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.726 "name": "Existed_Raid", 00:11:44.727 "uuid": "19d960e2-1a81-493b-a837-06a3416e4024", 00:11:44.727 "strip_size_kb": 64, 00:11:44.727 "state": "online", 00:11:44.727 "raid_level": "concat", 00:11:44.727 "superblock": true, 00:11:44.727 "num_base_bdevs": 
4, 00:11:44.727 "num_base_bdevs_discovered": 4, 00:11:44.727 "num_base_bdevs_operational": 4, 00:11:44.727 "base_bdevs_list": [ 00:11:44.727 { 00:11:44.727 "name": "BaseBdev1", 00:11:44.727 "uuid": "7db4cf98-a63c-4b5a-92bf-390e15b94ca9", 00:11:44.727 "is_configured": true, 00:11:44.727 "data_offset": 2048, 00:11:44.727 "data_size": 63488 00:11:44.727 }, 00:11:44.727 { 00:11:44.727 "name": "BaseBdev2", 00:11:44.727 "uuid": "8343150e-bcac-43b3-a39f-805f51bb6d29", 00:11:44.727 "is_configured": true, 00:11:44.727 "data_offset": 2048, 00:11:44.727 "data_size": 63488 00:11:44.727 }, 00:11:44.727 { 00:11:44.727 "name": "BaseBdev3", 00:11:44.727 "uuid": "db081350-6580-47cc-bf1b-bb2cb49ab427", 00:11:44.727 "is_configured": true, 00:11:44.727 "data_offset": 2048, 00:11:44.727 "data_size": 63488 00:11:44.727 }, 00:11:44.727 { 00:11:44.727 "name": "BaseBdev4", 00:11:44.727 "uuid": "1e32d282-d97c-4b54-88de-058dcfc2499e", 00:11:44.727 "is_configured": true, 00:11:44.727 "data_offset": 2048, 00:11:44.727 "data_size": 63488 00:11:44.727 } 00:11:44.727 ] 00:11:44.727 }' 00:11:44.727 01:31:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.727 01:31:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.005 
01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.005 [2024-11-17 01:31:53.346431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.005 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.005 "name": "Existed_Raid", 00:11:45.005 "aliases": [ 00:11:45.005 "19d960e2-1a81-493b-a837-06a3416e4024" 00:11:45.005 ], 00:11:45.005 "product_name": "Raid Volume", 00:11:45.005 "block_size": 512, 00:11:45.005 "num_blocks": 253952, 00:11:45.005 "uuid": "19d960e2-1a81-493b-a837-06a3416e4024", 00:11:45.005 "assigned_rate_limits": { 00:11:45.005 "rw_ios_per_sec": 0, 00:11:45.005 "rw_mbytes_per_sec": 0, 00:11:45.005 "r_mbytes_per_sec": 0, 00:11:45.005 "w_mbytes_per_sec": 0 00:11:45.005 }, 00:11:45.005 "claimed": false, 00:11:45.005 "zoned": false, 00:11:45.005 "supported_io_types": { 00:11:45.005 "read": true, 00:11:45.005 "write": true, 00:11:45.005 "unmap": true, 00:11:45.005 "flush": true, 00:11:45.005 "reset": true, 00:11:45.005 "nvme_admin": false, 00:11:45.005 "nvme_io": false, 00:11:45.005 "nvme_io_md": false, 00:11:45.005 "write_zeroes": true, 00:11:45.005 "zcopy": false, 00:11:45.005 "get_zone_info": false, 00:11:45.005 "zone_management": false, 00:11:45.005 "zone_append": false, 00:11:45.005 "compare": false, 00:11:45.005 "compare_and_write": false, 00:11:45.005 "abort": false, 00:11:45.005 "seek_hole": false, 00:11:45.005 "seek_data": false, 00:11:45.005 "copy": false, 00:11:45.005 
"nvme_iov_md": false 00:11:45.005 }, 00:11:45.005 "memory_domains": [ 00:11:45.005 { 00:11:45.005 "dma_device_id": "system", 00:11:45.005 "dma_device_type": 1 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.005 "dma_device_type": 2 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "dma_device_id": "system", 00:11:45.005 "dma_device_type": 1 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.005 "dma_device_type": 2 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "dma_device_id": "system", 00:11:45.005 "dma_device_type": 1 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.005 "dma_device_type": 2 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "dma_device_id": "system", 00:11:45.005 "dma_device_type": 1 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.005 "dma_device_type": 2 00:11:45.005 } 00:11:45.005 ], 00:11:45.005 "driver_specific": { 00:11:45.005 "raid": { 00:11:45.005 "uuid": "19d960e2-1a81-493b-a837-06a3416e4024", 00:11:45.005 "strip_size_kb": 64, 00:11:45.005 "state": "online", 00:11:45.005 "raid_level": "concat", 00:11:45.005 "superblock": true, 00:11:45.005 "num_base_bdevs": 4, 00:11:45.005 "num_base_bdevs_discovered": 4, 00:11:45.005 "num_base_bdevs_operational": 4, 00:11:45.005 "base_bdevs_list": [ 00:11:45.005 { 00:11:45.005 "name": "BaseBdev1", 00:11:45.005 "uuid": "7db4cf98-a63c-4b5a-92bf-390e15b94ca9", 00:11:45.005 "is_configured": true, 00:11:45.005 "data_offset": 2048, 00:11:45.005 "data_size": 63488 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "name": "BaseBdev2", 00:11:45.005 "uuid": "8343150e-bcac-43b3-a39f-805f51bb6d29", 00:11:45.005 "is_configured": true, 00:11:45.005 "data_offset": 2048, 00:11:45.005 "data_size": 63488 00:11:45.005 }, 00:11:45.005 { 00:11:45.005 "name": "BaseBdev3", 00:11:45.005 "uuid": "db081350-6580-47cc-bf1b-bb2cb49ab427", 00:11:45.005 "is_configured": true, 
00:11:45.005 "data_offset": 2048, 00:11:45.006 "data_size": 63488 00:11:45.006 }, 00:11:45.006 { 00:11:45.006 "name": "BaseBdev4", 00:11:45.006 "uuid": "1e32d282-d97c-4b54-88de-058dcfc2499e", 00:11:45.006 "is_configured": true, 00:11:45.006 "data_offset": 2048, 00:11:45.006 "data_size": 63488 00:11:45.006 } 00:11:45.006 ] 00:11:45.006 } 00:11:45.006 } 00:11:45.006 }' 00:11:45.006 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.006 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:45.006 BaseBdev2 00:11:45.006 BaseBdev3 00:11:45.006 BaseBdev4' 00:11:45.006 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.284 01:31:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.284 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.284 [2024-11-17 01:31:53.697534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:45.284 [2024-11-17 01:31:53.697565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.284 [2024-11-17 01:31:53.697613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.544 "name": "Existed_Raid", 00:11:45.544 "uuid": "19d960e2-1a81-493b-a837-06a3416e4024", 00:11:45.544 "strip_size_kb": 64, 00:11:45.544 "state": "offline", 00:11:45.544 "raid_level": "concat", 00:11:45.544 "superblock": true, 00:11:45.544 "num_base_bdevs": 4, 00:11:45.544 "num_base_bdevs_discovered": 3, 00:11:45.544 "num_base_bdevs_operational": 3, 00:11:45.544 "base_bdevs_list": [ 00:11:45.544 { 00:11:45.544 "name": null, 00:11:45.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.544 "is_configured": false, 00:11:45.544 "data_offset": 0, 00:11:45.544 "data_size": 63488 00:11:45.544 }, 00:11:45.544 { 00:11:45.544 "name": "BaseBdev2", 00:11:45.544 "uuid": "8343150e-bcac-43b3-a39f-805f51bb6d29", 00:11:45.544 "is_configured": true, 00:11:45.544 "data_offset": 2048, 00:11:45.544 "data_size": 63488 00:11:45.544 }, 00:11:45.544 { 00:11:45.544 "name": "BaseBdev3", 00:11:45.544 "uuid": "db081350-6580-47cc-bf1b-bb2cb49ab427", 00:11:45.544 "is_configured": true, 00:11:45.544 "data_offset": 2048, 00:11:45.544 "data_size": 63488 00:11:45.544 }, 00:11:45.544 { 00:11:45.544 "name": "BaseBdev4", 00:11:45.544 "uuid": "1e32d282-d97c-4b54-88de-058dcfc2499e", 00:11:45.544 "is_configured": true, 00:11:45.544 "data_offset": 2048, 00:11:45.544 "data_size": 63488 00:11:45.544 } 00:11:45.544 ] 00:11:45.544 }' 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.544 01:31:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.803 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:45.803 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.803 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.803 01:31:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.803 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.803 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.062 [2024-11-17 01:31:54.301503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.062 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.062 [2024-11-17 01:31:54.447670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:46.322 01:31:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.322 [2024-11-17 01:31:54.598592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:46.322 [2024-11-17 01:31:54.598638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.322 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.582 BaseBdev2 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.582 [ 00:11:46.582 { 00:11:46.582 "name": "BaseBdev2", 00:11:46.582 "aliases": [ 00:11:46.582 
"30f05eb2-b720-415b-b1fa-9246d27d28e9" 00:11:46.582 ], 00:11:46.582 "product_name": "Malloc disk", 00:11:46.582 "block_size": 512, 00:11:46.582 "num_blocks": 65536, 00:11:46.582 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:46.582 "assigned_rate_limits": { 00:11:46.582 "rw_ios_per_sec": 0, 00:11:46.582 "rw_mbytes_per_sec": 0, 00:11:46.582 "r_mbytes_per_sec": 0, 00:11:46.582 "w_mbytes_per_sec": 0 00:11:46.582 }, 00:11:46.582 "claimed": false, 00:11:46.582 "zoned": false, 00:11:46.582 "supported_io_types": { 00:11:46.582 "read": true, 00:11:46.582 "write": true, 00:11:46.582 "unmap": true, 00:11:46.582 "flush": true, 00:11:46.582 "reset": true, 00:11:46.582 "nvme_admin": false, 00:11:46.582 "nvme_io": false, 00:11:46.582 "nvme_io_md": false, 00:11:46.582 "write_zeroes": true, 00:11:46.582 "zcopy": true, 00:11:46.582 "get_zone_info": false, 00:11:46.582 "zone_management": false, 00:11:46.582 "zone_append": false, 00:11:46.582 "compare": false, 00:11:46.582 "compare_and_write": false, 00:11:46.582 "abort": true, 00:11:46.582 "seek_hole": false, 00:11:46.582 "seek_data": false, 00:11:46.582 "copy": true, 00:11:46.582 "nvme_iov_md": false 00:11:46.582 }, 00:11:46.582 "memory_domains": [ 00:11:46.582 { 00:11:46.582 "dma_device_id": "system", 00:11:46.582 "dma_device_type": 1 00:11:46.582 }, 00:11:46.582 { 00:11:46.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.582 "dma_device_type": 2 00:11:46.582 } 00:11:46.582 ], 00:11:46.582 "driver_specific": {} 00:11:46.582 } 00:11:46.582 ] 00:11:46.582 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.583 01:31:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.583 BaseBdev3 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.583 [ 00:11:46.583 { 
00:11:46.583 "name": "BaseBdev3", 00:11:46.583 "aliases": [ 00:11:46.583 "00cd89d7-b4f0-48f8-8986-f9c492a59483" 00:11:46.583 ], 00:11:46.583 "product_name": "Malloc disk", 00:11:46.583 "block_size": 512, 00:11:46.583 "num_blocks": 65536, 00:11:46.583 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:46.583 "assigned_rate_limits": { 00:11:46.583 "rw_ios_per_sec": 0, 00:11:46.583 "rw_mbytes_per_sec": 0, 00:11:46.583 "r_mbytes_per_sec": 0, 00:11:46.583 "w_mbytes_per_sec": 0 00:11:46.583 }, 00:11:46.583 "claimed": false, 00:11:46.583 "zoned": false, 00:11:46.583 "supported_io_types": { 00:11:46.583 "read": true, 00:11:46.583 "write": true, 00:11:46.583 "unmap": true, 00:11:46.583 "flush": true, 00:11:46.583 "reset": true, 00:11:46.583 "nvme_admin": false, 00:11:46.583 "nvme_io": false, 00:11:46.583 "nvme_io_md": false, 00:11:46.583 "write_zeroes": true, 00:11:46.583 "zcopy": true, 00:11:46.583 "get_zone_info": false, 00:11:46.583 "zone_management": false, 00:11:46.583 "zone_append": false, 00:11:46.583 "compare": false, 00:11:46.583 "compare_and_write": false, 00:11:46.583 "abort": true, 00:11:46.583 "seek_hole": false, 00:11:46.583 "seek_data": false, 00:11:46.583 "copy": true, 00:11:46.583 "nvme_iov_md": false 00:11:46.583 }, 00:11:46.583 "memory_domains": [ 00:11:46.583 { 00:11:46.583 "dma_device_id": "system", 00:11:46.583 "dma_device_type": 1 00:11:46.583 }, 00:11:46.583 { 00:11:46.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.583 "dma_device_type": 2 00:11:46.583 } 00:11:46.583 ], 00:11:46.583 "driver_specific": {} 00:11:46.583 } 00:11:46.583 ] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.583 BaseBdev4 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:46.583 [ 00:11:46.583 { 00:11:46.583 "name": "BaseBdev4", 00:11:46.583 "aliases": [ 00:11:46.583 "2736f641-7871-4f7a-92c0-a50e9a695979" 00:11:46.583 ], 00:11:46.583 "product_name": "Malloc disk", 00:11:46.583 "block_size": 512, 00:11:46.583 "num_blocks": 65536, 00:11:46.583 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:46.583 "assigned_rate_limits": { 00:11:46.583 "rw_ios_per_sec": 0, 00:11:46.583 "rw_mbytes_per_sec": 0, 00:11:46.583 "r_mbytes_per_sec": 0, 00:11:46.583 "w_mbytes_per_sec": 0 00:11:46.583 }, 00:11:46.583 "claimed": false, 00:11:46.583 "zoned": false, 00:11:46.583 "supported_io_types": { 00:11:46.583 "read": true, 00:11:46.583 "write": true, 00:11:46.583 "unmap": true, 00:11:46.583 "flush": true, 00:11:46.583 "reset": true, 00:11:46.583 "nvme_admin": false, 00:11:46.583 "nvme_io": false, 00:11:46.583 "nvme_io_md": false, 00:11:46.583 "write_zeroes": true, 00:11:46.583 "zcopy": true, 00:11:46.583 "get_zone_info": false, 00:11:46.583 "zone_management": false, 00:11:46.583 "zone_append": false, 00:11:46.583 "compare": false, 00:11:46.583 "compare_and_write": false, 00:11:46.583 "abort": true, 00:11:46.583 "seek_hole": false, 00:11:46.583 "seek_data": false, 00:11:46.583 "copy": true, 00:11:46.583 "nvme_iov_md": false 00:11:46.583 }, 00:11:46.583 "memory_domains": [ 00:11:46.583 { 00:11:46.583 "dma_device_id": "system", 00:11:46.583 "dma_device_type": 1 00:11:46.583 }, 00:11:46.583 { 00:11:46.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.583 "dma_device_type": 2 00:11:46.583 } 00:11:46.583 ], 00:11:46.583 "driver_specific": {} 00:11:46.583 } 00:11:46.583 ] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.583 01:31:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.583 01:31:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.583 [2024-11-17 01:31:54.997707] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:46.583 [2024-11-17 01:31:54.997817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:46.583 [2024-11-17 01:31:54.997864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.583 [2024-11-17 01:31:54.999701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.583 [2024-11-17 01:31:54.999803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.583 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.584 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.584 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.584 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.843 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.843 "name": "Existed_Raid", 00:11:46.843 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:46.843 "strip_size_kb": 64, 00:11:46.843 "state": "configuring", 00:11:46.843 "raid_level": "concat", 00:11:46.843 "superblock": true, 00:11:46.843 "num_base_bdevs": 4, 00:11:46.843 "num_base_bdevs_discovered": 3, 00:11:46.843 "num_base_bdevs_operational": 4, 00:11:46.843 "base_bdevs_list": [ 00:11:46.843 { 00:11:46.843 "name": "BaseBdev1", 00:11:46.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.843 "is_configured": false, 00:11:46.843 "data_offset": 0, 00:11:46.843 "data_size": 0 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "name": "BaseBdev2", 00:11:46.843 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:46.843 "is_configured": true, 00:11:46.843 "data_offset": 2048, 00:11:46.843 "data_size": 63488 
00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "name": "BaseBdev3", 00:11:46.843 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:46.843 "is_configured": true, 00:11:46.843 "data_offset": 2048, 00:11:46.843 "data_size": 63488 00:11:46.843 }, 00:11:46.843 { 00:11:46.843 "name": "BaseBdev4", 00:11:46.843 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:46.843 "is_configured": true, 00:11:46.843 "data_offset": 2048, 00:11:46.843 "data_size": 63488 00:11:46.843 } 00:11:46.843 ] 00:11:46.843 }' 00:11:46.843 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.843 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.102 [2024-11-17 01:31:55.456915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.102 "name": "Existed_Raid", 00:11:47.102 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:47.102 "strip_size_kb": 64, 00:11:47.102 "state": "configuring", 00:11:47.102 "raid_level": "concat", 00:11:47.102 "superblock": true, 00:11:47.102 "num_base_bdevs": 4, 00:11:47.102 "num_base_bdevs_discovered": 2, 00:11:47.102 "num_base_bdevs_operational": 4, 00:11:47.102 "base_bdevs_list": [ 00:11:47.102 { 00:11:47.102 "name": "BaseBdev1", 00:11:47.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.102 "is_configured": false, 00:11:47.102 "data_offset": 0, 00:11:47.102 "data_size": 0 00:11:47.102 }, 00:11:47.102 { 00:11:47.102 "name": null, 00:11:47.102 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:47.102 "is_configured": false, 00:11:47.102 "data_offset": 0, 00:11:47.102 "data_size": 63488 
00:11:47.102 }, 00:11:47.102 { 00:11:47.102 "name": "BaseBdev3", 00:11:47.102 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:47.102 "is_configured": true, 00:11:47.102 "data_offset": 2048, 00:11:47.102 "data_size": 63488 00:11:47.102 }, 00:11:47.102 { 00:11:47.102 "name": "BaseBdev4", 00:11:47.102 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:47.102 "is_configured": true, 00:11:47.102 "data_offset": 2048, 00:11:47.102 "data_size": 63488 00:11:47.102 } 00:11:47.102 ] 00:11:47.102 }' 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.102 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 [2024-11-17 01:31:55.995100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.673 BaseBdev1 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.673 01:31:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 [ 00:11:47.673 { 00:11:47.673 "name": "BaseBdev1", 00:11:47.673 "aliases": [ 00:11:47.673 "cb2110eb-b1fc-408c-9416-12ba7db069c5" 00:11:47.673 ], 00:11:47.673 "product_name": "Malloc disk", 00:11:47.673 "block_size": 512, 00:11:47.673 "num_blocks": 65536, 00:11:47.673 "uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:47.673 "assigned_rate_limits": { 00:11:47.673 "rw_ios_per_sec": 0, 00:11:47.673 "rw_mbytes_per_sec": 0, 
00:11:47.673 "r_mbytes_per_sec": 0, 00:11:47.673 "w_mbytes_per_sec": 0 00:11:47.673 }, 00:11:47.673 "claimed": true, 00:11:47.673 "claim_type": "exclusive_write", 00:11:47.673 "zoned": false, 00:11:47.673 "supported_io_types": { 00:11:47.673 "read": true, 00:11:47.673 "write": true, 00:11:47.673 "unmap": true, 00:11:47.673 "flush": true, 00:11:47.673 "reset": true, 00:11:47.673 "nvme_admin": false, 00:11:47.673 "nvme_io": false, 00:11:47.673 "nvme_io_md": false, 00:11:47.673 "write_zeroes": true, 00:11:47.673 "zcopy": true, 00:11:47.673 "get_zone_info": false, 00:11:47.673 "zone_management": false, 00:11:47.673 "zone_append": false, 00:11:47.673 "compare": false, 00:11:47.673 "compare_and_write": false, 00:11:47.673 "abort": true, 00:11:47.673 "seek_hole": false, 00:11:47.673 "seek_data": false, 00:11:47.673 "copy": true, 00:11:47.673 "nvme_iov_md": false 00:11:47.673 }, 00:11:47.673 "memory_domains": [ 00:11:47.673 { 00:11:47.673 "dma_device_id": "system", 00:11:47.673 "dma_device_type": 1 00:11:47.673 }, 00:11:47.673 { 00:11:47.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.673 "dma_device_type": 2 00:11:47.673 } 00:11:47.673 ], 00:11:47.673 "driver_specific": {} 00:11:47.673 } 00:11:47.673 ] 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.673 01:31:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.673 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.673 "name": "Existed_Raid", 00:11:47.673 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:47.673 "strip_size_kb": 64, 00:11:47.673 "state": "configuring", 00:11:47.673 "raid_level": "concat", 00:11:47.673 "superblock": true, 00:11:47.673 "num_base_bdevs": 4, 00:11:47.673 "num_base_bdevs_discovered": 3, 00:11:47.673 "num_base_bdevs_operational": 4, 00:11:47.673 "base_bdevs_list": [ 00:11:47.673 { 00:11:47.673 "name": "BaseBdev1", 00:11:47.673 "uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:47.673 "is_configured": true, 00:11:47.673 "data_offset": 2048, 00:11:47.673 "data_size": 63488 00:11:47.673 }, 00:11:47.673 { 
00:11:47.673 "name": null, 00:11:47.673 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:47.674 "is_configured": false, 00:11:47.674 "data_offset": 0, 00:11:47.674 "data_size": 63488 00:11:47.674 }, 00:11:47.674 { 00:11:47.674 "name": "BaseBdev3", 00:11:47.674 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:47.674 "is_configured": true, 00:11:47.674 "data_offset": 2048, 00:11:47.674 "data_size": 63488 00:11:47.674 }, 00:11:47.674 { 00:11:47.674 "name": "BaseBdev4", 00:11:47.674 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:47.674 "is_configured": true, 00:11:47.674 "data_offset": 2048, 00:11:47.674 "data_size": 63488 00:11:47.674 } 00:11:47.674 ] 00:11:47.674 }' 00:11:47.674 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.674 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.243 [2024-11-17 01:31:56.522250] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.243 01:31:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.243 "name": "Existed_Raid", 00:11:48.243 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:48.243 "strip_size_kb": 64, 00:11:48.243 "state": "configuring", 00:11:48.243 "raid_level": "concat", 00:11:48.243 "superblock": true, 00:11:48.243 "num_base_bdevs": 4, 00:11:48.243 "num_base_bdevs_discovered": 2, 00:11:48.243 "num_base_bdevs_operational": 4, 00:11:48.243 "base_bdevs_list": [ 00:11:48.243 { 00:11:48.243 "name": "BaseBdev1", 00:11:48.243 "uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:48.243 "is_configured": true, 00:11:48.243 "data_offset": 2048, 00:11:48.243 "data_size": 63488 00:11:48.243 }, 00:11:48.243 { 00:11:48.243 "name": null, 00:11:48.243 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:48.243 "is_configured": false, 00:11:48.243 "data_offset": 0, 00:11:48.243 "data_size": 63488 00:11:48.243 }, 00:11:48.243 { 00:11:48.243 "name": null, 00:11:48.243 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:48.243 "is_configured": false, 00:11:48.243 "data_offset": 0, 00:11:48.243 "data_size": 63488 00:11:48.243 }, 00:11:48.243 { 00:11:48.243 "name": "BaseBdev4", 00:11:48.243 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:48.243 "is_configured": true, 00:11:48.243 "data_offset": 2048, 00:11:48.243 "data_size": 63488 00:11:48.243 } 00:11:48.243 ] 00:11:48.243 }' 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.243 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.810 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.810 01:31:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.810 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.810 
01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.810 01:31:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.810 [2024-11-17 01:31:57.009438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.810 "name": "Existed_Raid", 00:11:48.810 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:48.810 "strip_size_kb": 64, 00:11:48.810 "state": "configuring", 00:11:48.810 "raid_level": "concat", 00:11:48.810 "superblock": true, 00:11:48.810 "num_base_bdevs": 4, 00:11:48.810 "num_base_bdevs_discovered": 3, 00:11:48.810 "num_base_bdevs_operational": 4, 00:11:48.810 "base_bdevs_list": [ 00:11:48.810 { 00:11:48.810 "name": "BaseBdev1", 00:11:48.810 "uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:48.810 "is_configured": true, 00:11:48.810 "data_offset": 2048, 00:11:48.810 "data_size": 63488 00:11:48.810 }, 00:11:48.810 { 00:11:48.810 "name": null, 00:11:48.810 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:48.810 "is_configured": false, 00:11:48.810 "data_offset": 0, 00:11:48.810 "data_size": 63488 00:11:48.810 }, 00:11:48.810 { 00:11:48.810 "name": "BaseBdev3", 00:11:48.810 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:48.810 "is_configured": true, 00:11:48.810 "data_offset": 2048, 00:11:48.810 "data_size": 63488 00:11:48.810 }, 00:11:48.810 { 00:11:48.810 "name": "BaseBdev4", 00:11:48.810 "uuid": 
"2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:48.810 "is_configured": true, 00:11:48.810 "data_offset": 2048, 00:11:48.810 "data_size": 63488 00:11:48.810 } 00:11:48.810 ] 00:11:48.810 }' 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.810 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.070 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.070 [2024-11-17 01:31:57.492646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.329 "name": "Existed_Raid", 00:11:49.329 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:49.329 "strip_size_kb": 64, 00:11:49.329 "state": "configuring", 00:11:49.329 "raid_level": "concat", 00:11:49.329 "superblock": true, 00:11:49.329 "num_base_bdevs": 4, 00:11:49.329 "num_base_bdevs_discovered": 2, 00:11:49.329 "num_base_bdevs_operational": 4, 00:11:49.329 "base_bdevs_list": [ 00:11:49.329 { 00:11:49.329 "name": null, 00:11:49.329 
"uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:49.329 "is_configured": false, 00:11:49.329 "data_offset": 0, 00:11:49.329 "data_size": 63488 00:11:49.329 }, 00:11:49.329 { 00:11:49.329 "name": null, 00:11:49.329 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:49.329 "is_configured": false, 00:11:49.329 "data_offset": 0, 00:11:49.329 "data_size": 63488 00:11:49.329 }, 00:11:49.329 { 00:11:49.329 "name": "BaseBdev3", 00:11:49.329 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:49.329 "is_configured": true, 00:11:49.329 "data_offset": 2048, 00:11:49.329 "data_size": 63488 00:11:49.329 }, 00:11:49.329 { 00:11:49.329 "name": "BaseBdev4", 00:11:49.329 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:49.329 "is_configured": true, 00:11:49.329 "data_offset": 2048, 00:11:49.329 "data_size": 63488 00:11:49.329 } 00:11:49.329 ] 00:11:49.329 }' 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.329 01:31:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.589 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:49.589 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.589 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.849 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.849 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:49.849 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.850 [2024-11-17 01:31:58.082996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.850 01:31:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.850 "name": "Existed_Raid", 00:11:49.850 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:49.850 "strip_size_kb": 64, 00:11:49.850 "state": "configuring", 00:11:49.850 "raid_level": "concat", 00:11:49.850 "superblock": true, 00:11:49.850 "num_base_bdevs": 4, 00:11:49.850 "num_base_bdevs_discovered": 3, 00:11:49.850 "num_base_bdevs_operational": 4, 00:11:49.850 "base_bdevs_list": [ 00:11:49.850 { 00:11:49.850 "name": null, 00:11:49.850 "uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:49.850 "is_configured": false, 00:11:49.850 "data_offset": 0, 00:11:49.850 "data_size": 63488 00:11:49.850 }, 00:11:49.850 { 00:11:49.850 "name": "BaseBdev2", 00:11:49.850 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:49.850 "is_configured": true, 00:11:49.850 "data_offset": 2048, 00:11:49.850 "data_size": 63488 00:11:49.850 }, 00:11:49.850 { 00:11:49.850 "name": "BaseBdev3", 00:11:49.850 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:49.850 "is_configured": true, 00:11:49.850 "data_offset": 2048, 00:11:49.850 "data_size": 63488 00:11:49.850 }, 00:11:49.850 { 00:11:49.850 "name": "BaseBdev4", 00:11:49.850 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:49.850 "is_configured": true, 00:11:49.850 "data_offset": 2048, 00:11:49.850 "data_size": 63488 00:11:49.850 } 00:11:49.850 ] 00:11:49.850 }' 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.850 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.110 01:31:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.110 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cb2110eb-b1fc-408c-9416-12ba7db069c5 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.370 [2024-11-17 01:31:58.610723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:50.370 [2024-11-17 01:31:58.610979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:50.370 [2024-11-17 01:31:58.610992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.370 [2024-11-17 01:31:58.611242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:50.370 [2024-11-17 01:31:58.611413] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:50.370 [2024-11-17 01:31:58.611427] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:50.370 [2024-11-17 01:31:58.611552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.370 NewBaseBdev 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:50.370 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.370 01:31:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.370 [ 00:11:50.370 { 00:11:50.370 "name": "NewBaseBdev", 00:11:50.370 "aliases": [ 00:11:50.370 "cb2110eb-b1fc-408c-9416-12ba7db069c5" 00:11:50.370 ], 00:11:50.370 "product_name": "Malloc disk", 00:11:50.370 "block_size": 512, 00:11:50.370 "num_blocks": 65536, 00:11:50.370 "uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:50.370 "assigned_rate_limits": { 00:11:50.370 "rw_ios_per_sec": 0, 00:11:50.370 "rw_mbytes_per_sec": 0, 00:11:50.370 "r_mbytes_per_sec": 0, 00:11:50.370 "w_mbytes_per_sec": 0 00:11:50.370 }, 00:11:50.370 "claimed": true, 00:11:50.370 "claim_type": "exclusive_write", 00:11:50.370 "zoned": false, 00:11:50.370 "supported_io_types": { 00:11:50.370 "read": true, 00:11:50.370 "write": true, 00:11:50.370 "unmap": true, 00:11:50.370 "flush": true, 00:11:50.370 "reset": true, 00:11:50.371 "nvme_admin": false, 00:11:50.371 "nvme_io": false, 00:11:50.371 "nvme_io_md": false, 00:11:50.371 "write_zeroes": true, 00:11:50.371 "zcopy": true, 00:11:50.371 "get_zone_info": false, 00:11:50.371 "zone_management": false, 00:11:50.371 "zone_append": false, 00:11:50.371 "compare": false, 00:11:50.371 "compare_and_write": false, 00:11:50.371 "abort": true, 00:11:50.371 "seek_hole": false, 00:11:50.371 "seek_data": false, 00:11:50.371 "copy": true, 00:11:50.371 "nvme_iov_md": false 00:11:50.371 }, 00:11:50.371 "memory_domains": [ 00:11:50.371 { 00:11:50.371 "dma_device_id": "system", 00:11:50.371 "dma_device_type": 1 00:11:50.371 }, 00:11:50.371 { 00:11:50.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.371 "dma_device_type": 2 00:11:50.371 } 00:11:50.371 ], 00:11:50.371 "driver_specific": {} 00:11:50.371 } 00:11:50.371 ] 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:50.371 01:31:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.371 "name": "Existed_Raid", 00:11:50.371 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:50.371 "strip_size_kb": 64, 00:11:50.371 
"state": "online", 00:11:50.371 "raid_level": "concat", 00:11:50.371 "superblock": true, 00:11:50.371 "num_base_bdevs": 4, 00:11:50.371 "num_base_bdevs_discovered": 4, 00:11:50.371 "num_base_bdevs_operational": 4, 00:11:50.371 "base_bdevs_list": [ 00:11:50.371 { 00:11:50.371 "name": "NewBaseBdev", 00:11:50.371 "uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:50.371 "is_configured": true, 00:11:50.371 "data_offset": 2048, 00:11:50.371 "data_size": 63488 00:11:50.371 }, 00:11:50.371 { 00:11:50.371 "name": "BaseBdev2", 00:11:50.371 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:50.371 "is_configured": true, 00:11:50.371 "data_offset": 2048, 00:11:50.371 "data_size": 63488 00:11:50.371 }, 00:11:50.371 { 00:11:50.371 "name": "BaseBdev3", 00:11:50.371 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:50.371 "is_configured": true, 00:11:50.371 "data_offset": 2048, 00:11:50.371 "data_size": 63488 00:11:50.371 }, 00:11:50.371 { 00:11:50.371 "name": "BaseBdev4", 00:11:50.371 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:50.371 "is_configured": true, 00:11:50.371 "data_offset": 2048, 00:11:50.371 "data_size": 63488 00:11:50.371 } 00:11:50.371 ] 00:11:50.371 }' 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.371 01:31:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.631 
01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.631 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.631 [2024-11-17 01:31:59.078294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.912 "name": "Existed_Raid", 00:11:50.912 "aliases": [ 00:11:50.912 "251d6b67-8af9-4306-b79d-3c1e99103d3f" 00:11:50.912 ], 00:11:50.912 "product_name": "Raid Volume", 00:11:50.912 "block_size": 512, 00:11:50.912 "num_blocks": 253952, 00:11:50.912 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:50.912 "assigned_rate_limits": { 00:11:50.912 "rw_ios_per_sec": 0, 00:11:50.912 "rw_mbytes_per_sec": 0, 00:11:50.912 "r_mbytes_per_sec": 0, 00:11:50.912 "w_mbytes_per_sec": 0 00:11:50.912 }, 00:11:50.912 "claimed": false, 00:11:50.912 "zoned": false, 00:11:50.912 "supported_io_types": { 00:11:50.912 "read": true, 00:11:50.912 "write": true, 00:11:50.912 "unmap": true, 00:11:50.912 "flush": true, 00:11:50.912 "reset": true, 00:11:50.912 "nvme_admin": false, 00:11:50.912 "nvme_io": false, 00:11:50.912 "nvme_io_md": false, 00:11:50.912 "write_zeroes": true, 00:11:50.912 "zcopy": false, 00:11:50.912 "get_zone_info": false, 00:11:50.912 "zone_management": false, 00:11:50.912 "zone_append": false, 00:11:50.912 "compare": false, 00:11:50.912 "compare_and_write": false, 00:11:50.912 "abort": 
false, 00:11:50.912 "seek_hole": false, 00:11:50.912 "seek_data": false, 00:11:50.912 "copy": false, 00:11:50.912 "nvme_iov_md": false 00:11:50.912 }, 00:11:50.912 "memory_domains": [ 00:11:50.912 { 00:11:50.912 "dma_device_id": "system", 00:11:50.912 "dma_device_type": 1 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.912 "dma_device_type": 2 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "dma_device_id": "system", 00:11:50.912 "dma_device_type": 1 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.912 "dma_device_type": 2 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "dma_device_id": "system", 00:11:50.912 "dma_device_type": 1 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.912 "dma_device_type": 2 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "dma_device_id": "system", 00:11:50.912 "dma_device_type": 1 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.912 "dma_device_type": 2 00:11:50.912 } 00:11:50.912 ], 00:11:50.912 "driver_specific": { 00:11:50.912 "raid": { 00:11:50.912 "uuid": "251d6b67-8af9-4306-b79d-3c1e99103d3f", 00:11:50.912 "strip_size_kb": 64, 00:11:50.912 "state": "online", 00:11:50.912 "raid_level": "concat", 00:11:50.912 "superblock": true, 00:11:50.912 "num_base_bdevs": 4, 00:11:50.912 "num_base_bdevs_discovered": 4, 00:11:50.912 "num_base_bdevs_operational": 4, 00:11:50.912 "base_bdevs_list": [ 00:11:50.912 { 00:11:50.912 "name": "NewBaseBdev", 00:11:50.912 "uuid": "cb2110eb-b1fc-408c-9416-12ba7db069c5", 00:11:50.912 "is_configured": true, 00:11:50.912 "data_offset": 2048, 00:11:50.912 "data_size": 63488 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "name": "BaseBdev2", 00:11:50.912 "uuid": "30f05eb2-b720-415b-b1fa-9246d27d28e9", 00:11:50.912 "is_configured": true, 00:11:50.912 "data_offset": 2048, 00:11:50.912 "data_size": 63488 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 
"name": "BaseBdev3", 00:11:50.912 "uuid": "00cd89d7-b4f0-48f8-8986-f9c492a59483", 00:11:50.912 "is_configured": true, 00:11:50.912 "data_offset": 2048, 00:11:50.912 "data_size": 63488 00:11:50.912 }, 00:11:50.912 { 00:11:50.912 "name": "BaseBdev4", 00:11:50.912 "uuid": "2736f641-7871-4f7a-92c0-a50e9a695979", 00:11:50.912 "is_configured": true, 00:11:50.912 "data_offset": 2048, 00:11:50.912 "data_size": 63488 00:11:50.912 } 00:11:50.912 ] 00:11:50.912 } 00:11:50.912 } 00:11:50.912 }' 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:50.912 BaseBdev2 00:11:50.912 BaseBdev3 00:11:50.912 BaseBdev4' 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.912 01:31:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.912 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.913 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.171 [2024-11-17 01:31:59.409390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.171 [2024-11-17 01:31:59.409418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.171 [2024-11-17 01:31:59.409488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.171 [2024-11-17 01:31:59.409554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.171 [2024-11-17 01:31:59.409564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71725 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71725 ']' 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71725 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71725 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.171 killing process with pid 71725 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71725' 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71725 00:11:51.171 [2024-11-17 01:31:59.457145] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.171 01:31:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71725 00:11:51.450 [2024-11-17 01:31:59.836919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.840 01:32:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:52.840 00:11:52.840 real 0m11.482s 00:11:52.840 user 0m18.300s 00:11:52.840 sys 0m2.095s 00:11:52.840 01:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.840 
************************************ 00:11:52.840 END TEST raid_state_function_test_sb 00:11:52.840 ************************************ 00:11:52.840 01:32:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.840 01:32:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:52.840 01:32:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:52.840 01:32:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.840 01:32:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.840 ************************************ 00:11:52.840 START TEST raid_superblock_test 00:11:52.840 ************************************ 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72397 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72397 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72397 ']' 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.840 01:32:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.841 01:32:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.841 01:32:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.841 01:32:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.841 [2024-11-17 01:32:01.047882] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:52.841 [2024-11-17 01:32:01.048077] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72397 ] 00:11:52.841 [2024-11-17 01:32:01.222391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.100 [2024-11-17 01:32:01.337098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.100 [2024-11-17 01:32:01.529941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.100 [2024-11-17 01:32:01.529993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:53.671 
01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.671 malloc1 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.671 [2024-11-17 01:32:01.921122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:53.671 [2024-11-17 01:32:01.921253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.671 [2024-11-17 01:32:01.921297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:53.671 [2024-11-17 01:32:01.921328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.671 [2024-11-17 01:32:01.923426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.671 [2024-11-17 01:32:01.923499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:53.671 pt1 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.671 malloc2 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.671 [2024-11-17 01:32:01.977417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:53.671 [2024-11-17 01:32:01.977468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.671 [2024-11-17 01:32:01.977487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:53.671 [2024-11-17 01:32:01.977496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.671 [2024-11-17 01:32:01.979549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.671 [2024-11-17 01:32:01.979598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:53.671 
pt2 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.671 01:32:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.671 malloc3 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.671 [2024-11-17 01:32:02.043192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:53.671 [2024-11-17 01:32:02.043294] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.671 [2024-11-17 01:32:02.043333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:53.671 [2024-11-17 01:32:02.043362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.671 [2024-11-17 01:32:02.045437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.671 [2024-11-17 01:32:02.045516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:53.671 pt3 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:53.671 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.672 malloc4 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.672 [2024-11-17 01:32:02.100842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:53.672 [2024-11-17 01:32:02.100945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.672 [2024-11-17 01:32:02.100980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:53.672 [2024-11-17 01:32:02.101010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.672 [2024-11-17 01:32:02.103061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.672 [2024-11-17 01:32:02.103155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:53.672 pt4 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.672 [2024-11-17 01:32:02.112845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:53.672 [2024-11-17 
01:32:02.114617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:53.672 [2024-11-17 01:32:02.114680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:53.672 [2024-11-17 01:32:02.114743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:53.672 [2024-11-17 01:32:02.114974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:53.672 [2024-11-17 01:32:02.114987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:53.672 [2024-11-17 01:32:02.115238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:53.672 [2024-11-17 01:32:02.115416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:53.672 [2024-11-17 01:32:02.115430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:53.672 [2024-11-17 01:32:02.115581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.672 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.931 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.931 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.931 "name": "raid_bdev1", 00:11:53.931 "uuid": "d28fd45b-ec21-4b61-8ce3-d00687e33c6c", 00:11:53.931 "strip_size_kb": 64, 00:11:53.931 "state": "online", 00:11:53.931 "raid_level": "concat", 00:11:53.931 "superblock": true, 00:11:53.931 "num_base_bdevs": 4, 00:11:53.931 "num_base_bdevs_discovered": 4, 00:11:53.931 "num_base_bdevs_operational": 4, 00:11:53.931 "base_bdevs_list": [ 00:11:53.931 { 00:11:53.931 "name": "pt1", 00:11:53.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.931 "is_configured": true, 00:11:53.931 "data_offset": 2048, 00:11:53.931 "data_size": 63488 00:11:53.931 }, 00:11:53.931 { 00:11:53.931 "name": "pt2", 00:11:53.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.931 "is_configured": true, 00:11:53.931 "data_offset": 2048, 00:11:53.931 "data_size": 63488 00:11:53.931 }, 00:11:53.931 { 00:11:53.931 "name": "pt3", 00:11:53.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.931 "is_configured": true, 00:11:53.931 "data_offset": 2048, 00:11:53.931 
"data_size": 63488 00:11:53.931 }, 00:11:53.931 { 00:11:53.931 "name": "pt4", 00:11:53.931 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.931 "is_configured": true, 00:11:53.931 "data_offset": 2048, 00:11:53.931 "data_size": 63488 00:11:53.931 } 00:11:53.931 ] 00:11:53.931 }' 00:11:53.931 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.931 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.191 [2024-11-17 01:32:02.580330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.191 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.191 "name": "raid_bdev1", 00:11:54.191 "aliases": [ 00:11:54.191 "d28fd45b-ec21-4b61-8ce3-d00687e33c6c" 
00:11:54.191 ], 00:11:54.191 "product_name": "Raid Volume", 00:11:54.191 "block_size": 512, 00:11:54.191 "num_blocks": 253952, 00:11:54.191 "uuid": "d28fd45b-ec21-4b61-8ce3-d00687e33c6c", 00:11:54.191 "assigned_rate_limits": { 00:11:54.191 "rw_ios_per_sec": 0, 00:11:54.191 "rw_mbytes_per_sec": 0, 00:11:54.191 "r_mbytes_per_sec": 0, 00:11:54.191 "w_mbytes_per_sec": 0 00:11:54.191 }, 00:11:54.191 "claimed": false, 00:11:54.191 "zoned": false, 00:11:54.191 "supported_io_types": { 00:11:54.191 "read": true, 00:11:54.191 "write": true, 00:11:54.191 "unmap": true, 00:11:54.191 "flush": true, 00:11:54.191 "reset": true, 00:11:54.191 "nvme_admin": false, 00:11:54.191 "nvme_io": false, 00:11:54.191 "nvme_io_md": false, 00:11:54.191 "write_zeroes": true, 00:11:54.191 "zcopy": false, 00:11:54.191 "get_zone_info": false, 00:11:54.191 "zone_management": false, 00:11:54.191 "zone_append": false, 00:11:54.191 "compare": false, 00:11:54.191 "compare_and_write": false, 00:11:54.191 "abort": false, 00:11:54.191 "seek_hole": false, 00:11:54.191 "seek_data": false, 00:11:54.191 "copy": false, 00:11:54.191 "nvme_iov_md": false 00:11:54.191 }, 00:11:54.191 "memory_domains": [ 00:11:54.191 { 00:11:54.191 "dma_device_id": "system", 00:11:54.191 "dma_device_type": 1 00:11:54.191 }, 00:11:54.191 { 00:11:54.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.191 "dma_device_type": 2 00:11:54.191 }, 00:11:54.191 { 00:11:54.191 "dma_device_id": "system", 00:11:54.191 "dma_device_type": 1 00:11:54.191 }, 00:11:54.191 { 00:11:54.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.191 "dma_device_type": 2 00:11:54.191 }, 00:11:54.191 { 00:11:54.191 "dma_device_id": "system", 00:11:54.191 "dma_device_type": 1 00:11:54.191 }, 00:11:54.191 { 00:11:54.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.191 "dma_device_type": 2 00:11:54.191 }, 00:11:54.191 { 00:11:54.191 "dma_device_id": "system", 00:11:54.191 "dma_device_type": 1 00:11:54.191 }, 00:11:54.191 { 00:11:54.191 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:54.191 "dma_device_type": 2 00:11:54.191 } 00:11:54.191 ], 00:11:54.191 "driver_specific": { 00:11:54.191 "raid": { 00:11:54.191 "uuid": "d28fd45b-ec21-4b61-8ce3-d00687e33c6c", 00:11:54.191 "strip_size_kb": 64, 00:11:54.191 "state": "online", 00:11:54.191 "raid_level": "concat", 00:11:54.191 "superblock": true, 00:11:54.191 "num_base_bdevs": 4, 00:11:54.191 "num_base_bdevs_discovered": 4, 00:11:54.191 "num_base_bdevs_operational": 4, 00:11:54.191 "base_bdevs_list": [ 00:11:54.191 { 00:11:54.191 "name": "pt1", 00:11:54.191 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:54.191 "is_configured": true, 00:11:54.191 "data_offset": 2048, 00:11:54.191 "data_size": 63488 00:11:54.191 }, 00:11:54.191 { 00:11:54.191 "name": "pt2", 00:11:54.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.192 "is_configured": true, 00:11:54.192 "data_offset": 2048, 00:11:54.192 "data_size": 63488 00:11:54.192 }, 00:11:54.192 { 00:11:54.192 "name": "pt3", 00:11:54.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:54.192 "is_configured": true, 00:11:54.192 "data_offset": 2048, 00:11:54.192 "data_size": 63488 00:11:54.192 }, 00:11:54.192 { 00:11:54.192 "name": "pt4", 00:11:54.192 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:54.192 "is_configured": true, 00:11:54.192 "data_offset": 2048, 00:11:54.192 "data_size": 63488 00:11:54.192 } 00:11:54.192 ] 00:11:54.192 } 00:11:54.192 } 00:11:54.192 }' 00:11:54.192 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:54.451 pt2 00:11:54.451 pt3 00:11:54.451 pt4' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.451 01:32:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.451 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.452 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.452 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.452 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.452 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:54.452 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.452 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:54.452 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:54.452 [2024-11-17 01:32:02.879724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.452 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d28fd45b-ec21-4b61-8ce3-d00687e33c6c 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d28fd45b-ec21-4b61-8ce3-d00687e33c6c ']' 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.712 [2024-11-17 01:32:02.927379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.712 [2024-11-17 01:32:02.927402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.712 [2024-11-17 01:32:02.927470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.712 [2024-11-17 01:32:02.927535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.712 [2024-11-17 01:32:02.927548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:54.712 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:54.713 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.713 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.713 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.713 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:54.713 01:32:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:54.713 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.713 01:32:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.713 [2024-11-17 01:32:03.091159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:54.713 [2024-11-17 01:32:03.093020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:54.713 [2024-11-17 01:32:03.093111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:54.713 [2024-11-17 01:32:03.093175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:54.713 [2024-11-17 01:32:03.093259] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:54.713 [2024-11-17 01:32:03.093345] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:54.713 [2024-11-17 01:32:03.093406] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:54.713 [2024-11-17 01:32:03.093459] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:54.713 [2024-11-17 01:32:03.093517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.713 [2024-11-17 01:32:03.093549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:54.713 request: 00:11:54.713 { 00:11:54.713 "name": "raid_bdev1", 00:11:54.713 "raid_level": "concat", 00:11:54.713 "base_bdevs": [ 00:11:54.713 "malloc1", 00:11:54.713 "malloc2", 00:11:54.713 "malloc3", 00:11:54.713 "malloc4" 00:11:54.713 ], 00:11:54.713 "strip_size_kb": 64, 00:11:54.713 "superblock": false, 00:11:54.713 "method": "bdev_raid_create", 00:11:54.713 "req_id": 1 00:11:54.713 } 00:11:54.713 Got JSON-RPC error response 00:11:54.713 response: 00:11:54.713 { 00:11:54.713 "code": -17, 00:11:54.713 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:54.713 } 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.713 [2024-11-17 01:32:03.158989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:54.713 [2024-11-17 01:32:03.159098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.713 [2024-11-17 01:32:03.159130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:54.713 [2024-11-17 01:32:03.159160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.713 [2024-11-17 01:32:03.161282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.713 [2024-11-17 01:32:03.161355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:54.713 [2024-11-17 01:32:03.161442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:54.713 [2024-11-17 01:32:03.161543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:54.713 pt1 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.713 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.976 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.976 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.976 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.976 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.976 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.976 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.976 "name": "raid_bdev1", 00:11:54.976 "uuid": "d28fd45b-ec21-4b61-8ce3-d00687e33c6c", 00:11:54.976 "strip_size_kb": 64, 00:11:54.976 "state": "configuring", 00:11:54.976 "raid_level": "concat", 00:11:54.976 "superblock": true, 00:11:54.976 "num_base_bdevs": 4, 00:11:54.976 "num_base_bdevs_discovered": 1, 00:11:54.976 "num_base_bdevs_operational": 4, 00:11:54.976 "base_bdevs_list": [ 00:11:54.976 { 00:11:54.976 "name": "pt1", 00:11:54.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:54.976 "is_configured": true, 00:11:54.976 "data_offset": 2048, 00:11:54.976 "data_size": 63488 00:11:54.976 }, 00:11:54.976 { 00:11:54.976 "name": null, 00:11:54.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.976 "is_configured": false, 00:11:54.976 "data_offset": 2048, 00:11:54.976 "data_size": 63488 00:11:54.976 }, 00:11:54.976 { 00:11:54.976 "name": null, 00:11:54.976 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:54.976 "is_configured": false, 00:11:54.976 "data_offset": 2048, 00:11:54.976 "data_size": 63488 00:11:54.976 }, 00:11:54.976 { 00:11:54.976 "name": null, 00:11:54.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:54.976 "is_configured": false, 00:11:54.976 "data_offset": 2048, 00:11:54.976 "data_size": 63488 00:11:54.976 } 00:11:54.976 ] 00:11:54.976 }' 00:11:54.976 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.976 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.237 [2024-11-17 01:32:03.602272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:55.237 [2024-11-17 01:32:03.602343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.237 [2024-11-17 01:32:03.602361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:55.237 [2024-11-17 01:32:03.602372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.237 [2024-11-17 01:32:03.602783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.237 [2024-11-17 01:32:03.602804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:55.237 [2024-11-17 01:32:03.602882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:55.237 [2024-11-17 01:32:03.602904] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:55.237 pt2 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.237 [2024-11-17 01:32:03.614251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.237 01:32:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.237 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.237 "name": "raid_bdev1", 00:11:55.237 "uuid": "d28fd45b-ec21-4b61-8ce3-d00687e33c6c", 00:11:55.237 "strip_size_kb": 64, 00:11:55.237 "state": "configuring", 00:11:55.237 "raid_level": "concat", 00:11:55.237 "superblock": true, 00:11:55.237 "num_base_bdevs": 4, 00:11:55.237 "num_base_bdevs_discovered": 1, 00:11:55.237 "num_base_bdevs_operational": 4, 00:11:55.237 "base_bdevs_list": [ 00:11:55.237 { 00:11:55.237 "name": "pt1", 00:11:55.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:55.237 "is_configured": true, 00:11:55.237 "data_offset": 2048, 00:11:55.237 "data_size": 63488 00:11:55.237 }, 00:11:55.237 { 00:11:55.237 "name": null, 00:11:55.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.237 "is_configured": false, 00:11:55.237 "data_offset": 0, 00:11:55.237 "data_size": 63488 00:11:55.237 }, 00:11:55.237 { 00:11:55.237 "name": null, 00:11:55.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:55.237 "is_configured": false, 00:11:55.237 "data_offset": 2048, 00:11:55.238 "data_size": 63488 00:11:55.238 }, 00:11:55.238 { 00:11:55.238 "name": null, 00:11:55.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:55.238 "is_configured": false, 00:11:55.238 "data_offset": 2048, 00:11:55.238 "data_size": 63488 00:11:55.238 } 00:11:55.238 ] 00:11:55.238 }' 00:11:55.238 01:32:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.238 01:32:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.808 [2024-11-17 01:32:04.049489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:55.808 [2024-11-17 01:32:04.049586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.808 [2024-11-17 01:32:04.049637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:55.808 [2024-11-17 01:32:04.049667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.808 [2024-11-17 01:32:04.050107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.808 [2024-11-17 01:32:04.050164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:55.808 [2024-11-17 01:32:04.050272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:55.808 [2024-11-17 01:32:04.050322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:55.808 pt2 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.808 [2024-11-17 01:32:04.061479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:55.808 [2024-11-17 01:32:04.061571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.808 [2024-11-17 01:32:04.061622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:55.808 [2024-11-17 01:32:04.061658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.808 [2024-11-17 01:32:04.062080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.808 [2024-11-17 01:32:04.062150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:55.808 [2024-11-17 01:32:04.062218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:55.808 [2024-11-17 01:32:04.062237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:55.808 pt3 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.808 [2024-11-17 01:32:04.069449] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:55.808 [2024-11-17 01:32:04.069497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.808 [2024-11-17 01:32:04.069515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:55.808 [2024-11-17 01:32:04.069523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.808 [2024-11-17 01:32:04.069879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.808 [2024-11-17 01:32:04.069895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:55.808 [2024-11-17 01:32:04.069953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:55.808 [2024-11-17 01:32:04.069970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:55.808 [2024-11-17 01:32:04.070107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:55.808 [2024-11-17 01:32:04.070126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:55.808 [2024-11-17 01:32:04.070380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:55.808 [2024-11-17 01:32:04.070558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:55.808 [2024-11-17 01:32:04.070573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:55.808 [2024-11-17 01:32:04.070713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.808 pt4 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.808 "name": "raid_bdev1", 00:11:55.808 "uuid": "d28fd45b-ec21-4b61-8ce3-d00687e33c6c", 00:11:55.808 "strip_size_kb": 64, 00:11:55.808 "state": "online", 00:11:55.808 "raid_level": "concat", 00:11:55.808 
"superblock": true, 00:11:55.808 "num_base_bdevs": 4, 00:11:55.808 "num_base_bdevs_discovered": 4, 00:11:55.808 "num_base_bdevs_operational": 4, 00:11:55.808 "base_bdevs_list": [ 00:11:55.808 { 00:11:55.808 "name": "pt1", 00:11:55.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:55.808 "is_configured": true, 00:11:55.808 "data_offset": 2048, 00:11:55.808 "data_size": 63488 00:11:55.808 }, 00:11:55.808 { 00:11:55.808 "name": "pt2", 00:11:55.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.808 "is_configured": true, 00:11:55.808 "data_offset": 2048, 00:11:55.808 "data_size": 63488 00:11:55.808 }, 00:11:55.808 { 00:11:55.808 "name": "pt3", 00:11:55.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:55.808 "is_configured": true, 00:11:55.808 "data_offset": 2048, 00:11:55.808 "data_size": 63488 00:11:55.808 }, 00:11:55.808 { 00:11:55.808 "name": "pt4", 00:11:55.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:55.808 "is_configured": true, 00:11:55.808 "data_offset": 2048, 00:11:55.808 "data_size": 63488 00:11:55.808 } 00:11:55.808 ] 00:11:55.808 }' 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.808 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.068 01:32:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.068 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.068 [2024-11-17 01:32:04.521024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.329 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.329 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.329 "name": "raid_bdev1", 00:11:56.330 "aliases": [ 00:11:56.330 "d28fd45b-ec21-4b61-8ce3-d00687e33c6c" 00:11:56.330 ], 00:11:56.330 "product_name": "Raid Volume", 00:11:56.330 "block_size": 512, 00:11:56.330 "num_blocks": 253952, 00:11:56.330 "uuid": "d28fd45b-ec21-4b61-8ce3-d00687e33c6c", 00:11:56.330 "assigned_rate_limits": { 00:11:56.330 "rw_ios_per_sec": 0, 00:11:56.330 "rw_mbytes_per_sec": 0, 00:11:56.330 "r_mbytes_per_sec": 0, 00:11:56.330 "w_mbytes_per_sec": 0 00:11:56.330 }, 00:11:56.330 "claimed": false, 00:11:56.330 "zoned": false, 00:11:56.330 "supported_io_types": { 00:11:56.330 "read": true, 00:11:56.330 "write": true, 00:11:56.330 "unmap": true, 00:11:56.330 "flush": true, 00:11:56.330 "reset": true, 00:11:56.330 "nvme_admin": false, 00:11:56.330 "nvme_io": false, 00:11:56.330 "nvme_io_md": false, 00:11:56.330 "write_zeroes": true, 00:11:56.330 "zcopy": false, 00:11:56.330 "get_zone_info": false, 00:11:56.330 "zone_management": false, 00:11:56.330 "zone_append": false, 00:11:56.330 "compare": false, 00:11:56.330 "compare_and_write": false, 00:11:56.330 "abort": false, 00:11:56.330 "seek_hole": false, 00:11:56.330 "seek_data": false, 00:11:56.330 "copy": false, 00:11:56.330 "nvme_iov_md": false 00:11:56.330 }, 00:11:56.330 
"memory_domains": [ 00:11:56.330 { 00:11:56.330 "dma_device_id": "system", 00:11:56.330 "dma_device_type": 1 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.330 "dma_device_type": 2 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "dma_device_id": "system", 00:11:56.330 "dma_device_type": 1 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.330 "dma_device_type": 2 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "dma_device_id": "system", 00:11:56.330 "dma_device_type": 1 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.330 "dma_device_type": 2 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "dma_device_id": "system", 00:11:56.330 "dma_device_type": 1 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.330 "dma_device_type": 2 00:11:56.330 } 00:11:56.330 ], 00:11:56.330 "driver_specific": { 00:11:56.330 "raid": { 00:11:56.330 "uuid": "d28fd45b-ec21-4b61-8ce3-d00687e33c6c", 00:11:56.330 "strip_size_kb": 64, 00:11:56.330 "state": "online", 00:11:56.330 "raid_level": "concat", 00:11:56.330 "superblock": true, 00:11:56.330 "num_base_bdevs": 4, 00:11:56.330 "num_base_bdevs_discovered": 4, 00:11:56.330 "num_base_bdevs_operational": 4, 00:11:56.330 "base_bdevs_list": [ 00:11:56.330 { 00:11:56.330 "name": "pt1", 00:11:56.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.330 "is_configured": true, 00:11:56.330 "data_offset": 2048, 00:11:56.330 "data_size": 63488 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "name": "pt2", 00:11:56.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.330 "is_configured": true, 00:11:56.330 "data_offset": 2048, 00:11:56.330 "data_size": 63488 00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "name": "pt3", 00:11:56.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.330 "is_configured": true, 00:11:56.330 "data_offset": 2048, 00:11:56.330 "data_size": 63488 
00:11:56.330 }, 00:11:56.330 { 00:11:56.330 "name": "pt4", 00:11:56.330 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.330 "is_configured": true, 00:11:56.330 "data_offset": 2048, 00:11:56.330 "data_size": 63488 00:11:56.330 } 00:11:56.330 ] 00:11:56.330 } 00:11:56.330 } 00:11:56.330 }' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:56.330 pt2 00:11:56.330 pt3 00:11:56.330 pt4' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.330 
01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.330 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.590 [2024-11-17 01:32:04.832388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d28fd45b-ec21-4b61-8ce3-d00687e33c6c '!=' d28fd45b-ec21-4b61-8ce3-d00687e33c6c ']' 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:56.590 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72397 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72397 ']' 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72397 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72397 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72397' 00:11:56.591 killing process with pid 72397 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72397 00:11:56.591 [2024-11-17 01:32:04.898373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.591 [2024-11-17 01:32:04.898508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.591 01:32:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72397 00:11:56.591 [2024-11-17 01:32:04.898608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.591 [2024-11-17 01:32:04.898619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:56.850 [2024-11-17 01:32:05.285243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:58.231 01:32:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:58.231 00:11:58.231 real 0m5.384s 00:11:58.231 user 0m7.704s 00:11:58.231 sys 0m0.953s 00:11:58.231 01:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.231 01:32:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.231 ************************************ 00:11:58.231 END TEST raid_superblock_test 
00:11:58.231 ************************************ 00:11:58.231 01:32:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:58.231 01:32:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:58.231 01:32:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.231 01:32:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.231 ************************************ 00:11:58.231 START TEST raid_read_error_test 00:11:58.231 ************************************ 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.87yITtLPP1 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72663 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72663 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72663 ']' 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.231 01:32:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.231 [2024-11-17 01:32:06.509785] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:58.231 [2024-11-17 01:32:06.509993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72663 ] 00:11:58.231 [2024-11-17 01:32:06.662753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.492 [2024-11-17 01:32:06.776602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.807 [2024-11-17 01:32:06.972890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.807 [2024-11-17 01:32:06.972947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.067 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.067 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:59.067 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.068 BaseBdev1_malloc 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.068 true 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.068 [2024-11-17 01:32:07.398702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:59.068 [2024-11-17 01:32:07.398778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.068 [2024-11-17 01:32:07.398799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:59.068 [2024-11-17 01:32:07.398810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.068 [2024-11-17 01:32:07.400937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.068 [2024-11-17 01:32:07.400975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.068 BaseBdev1 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.068 BaseBdev2_malloc 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.068 true 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.068 [2024-11-17 01:32:07.465795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:59.068 [2024-11-17 01:32:07.465848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.068 [2024-11-17 01:32:07.465880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:59.068 [2024-11-17 01:32:07.465891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.068 [2024-11-17 01:32:07.467969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.068 [2024-11-17 01:32:07.468021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:59.068 BaseBdev2 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.068 BaseBdev3_malloc 00:11:59.068 01:32:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.068 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.327 true 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.327 [2024-11-17 01:32:07.542058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:59.327 [2024-11-17 01:32:07.542153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.327 [2024-11-17 01:32:07.542189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:59.327 [2024-11-17 01:32:07.542199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.327 [2024-11-17 01:32:07.544271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.327 [2024-11-17 01:32:07.544311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:59.327 BaseBdev3 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.327 BaseBdev4_malloc 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.327 true 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.327 [2024-11-17 01:32:07.606810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:59.327 [2024-11-17 01:32:07.606854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.327 [2024-11-17 01:32:07.606870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:59.327 [2024-11-17 01:32:07.606880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.327 [2024-11-17 01:32:07.608877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.327 [2024-11-17 01:32:07.608917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:59.327 BaseBdev4 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.327 [2024-11-17 01:32:07.618838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.327 [2024-11-17 01:32:07.620614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.327 [2024-11-17 01:32:07.620683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.327 [2024-11-17 01:32:07.620745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.327 [2024-11-17 01:32:07.620992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:59.327 [2024-11-17 01:32:07.621007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:59.327 [2024-11-17 01:32:07.621230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:59.327 [2024-11-17 01:32:07.621407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:59.327 [2024-11-17 01:32:07.621418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:59.327 [2024-11-17 01:32:07.621580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:59.327 01:32:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.327 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.328 "name": "raid_bdev1", 00:11:59.328 "uuid": "a3c32c2d-d492-419d-a8f3-f95233002a3b", 00:11:59.328 "strip_size_kb": 64, 00:11:59.328 "state": "online", 00:11:59.328 "raid_level": "concat", 00:11:59.328 "superblock": true, 00:11:59.328 "num_base_bdevs": 4, 00:11:59.328 "num_base_bdevs_discovered": 4, 00:11:59.328 "num_base_bdevs_operational": 4, 00:11:59.328 "base_bdevs_list": [ 
00:11:59.328 { 00:11:59.328 "name": "BaseBdev1", 00:11:59.328 "uuid": "b8b631ed-0b0f-5913-abcf-90f5d89565ab", 00:11:59.328 "is_configured": true, 00:11:59.328 "data_offset": 2048, 00:11:59.328 "data_size": 63488 00:11:59.328 }, 00:11:59.328 { 00:11:59.328 "name": "BaseBdev2", 00:11:59.328 "uuid": "f3e752cf-9074-5f5c-84cb-ac056a22a3d0", 00:11:59.328 "is_configured": true, 00:11:59.328 "data_offset": 2048, 00:11:59.328 "data_size": 63488 00:11:59.328 }, 00:11:59.328 { 00:11:59.328 "name": "BaseBdev3", 00:11:59.328 "uuid": "e1bd4d24-ac4a-54c9-9746-9869b3d193e2", 00:11:59.328 "is_configured": true, 00:11:59.328 "data_offset": 2048, 00:11:59.328 "data_size": 63488 00:11:59.328 }, 00:11:59.328 { 00:11:59.328 "name": "BaseBdev4", 00:11:59.328 "uuid": "f855e955-045b-5520-9748-ce2d07553ac4", 00:11:59.328 "is_configured": true, 00:11:59.328 "data_offset": 2048, 00:11:59.328 "data_size": 63488 00:11:59.328 } 00:11:59.328 ] 00:11:59.328 }' 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.328 01:32:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.896 01:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:59.896 01:32:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:59.896 [2024-11-17 01:32:08.151061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.836 01:32:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.836 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.837 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.837 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.837 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.837 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.837 01:32:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.837 "name": "raid_bdev1", 00:12:00.837 "uuid": "a3c32c2d-d492-419d-a8f3-f95233002a3b", 00:12:00.837 "strip_size_kb": 64, 00:12:00.837 "state": "online", 00:12:00.837 "raid_level": "concat", 00:12:00.837 "superblock": true, 00:12:00.837 "num_base_bdevs": 4, 00:12:00.837 "num_base_bdevs_discovered": 4, 00:12:00.837 "num_base_bdevs_operational": 4, 00:12:00.837 "base_bdevs_list": [ 00:12:00.837 { 00:12:00.837 "name": "BaseBdev1", 00:12:00.837 "uuid": "b8b631ed-0b0f-5913-abcf-90f5d89565ab", 00:12:00.837 "is_configured": true, 00:12:00.837 "data_offset": 2048, 00:12:00.837 "data_size": 63488 00:12:00.837 }, 00:12:00.837 { 00:12:00.837 "name": "BaseBdev2", 00:12:00.837 "uuid": "f3e752cf-9074-5f5c-84cb-ac056a22a3d0", 00:12:00.837 "is_configured": true, 00:12:00.837 "data_offset": 2048, 00:12:00.837 "data_size": 63488 00:12:00.837 }, 00:12:00.837 { 00:12:00.837 "name": "BaseBdev3", 00:12:00.837 "uuid": "e1bd4d24-ac4a-54c9-9746-9869b3d193e2", 00:12:00.837 "is_configured": true, 00:12:00.837 "data_offset": 2048, 00:12:00.837 "data_size": 63488 00:12:00.837 }, 00:12:00.837 { 00:12:00.837 "name": "BaseBdev4", 00:12:00.837 "uuid": "f855e955-045b-5520-9748-ce2d07553ac4", 00:12:00.837 "is_configured": true, 00:12:00.837 "data_offset": 2048, 00:12:00.837 "data_size": 63488 00:12:00.837 } 00:12:00.837 ] 00:12:00.837 }' 00:12:00.837 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.837 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.406 [2024-11-17 01:32:09.571040] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:01.406 [2024-11-17 01:32:09.571135] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.406 [2024-11-17 01:32:09.573896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.406 [2024-11-17 01:32:09.574009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.406 [2024-11-17 01:32:09.574075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.406 [2024-11-17 01:32:09.574144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:01.406 { 00:12:01.406 "results": [ 00:12:01.406 { 00:12:01.406 "job": "raid_bdev1", 00:12:01.406 "core_mask": "0x1", 00:12:01.406 "workload": "randrw", 00:12:01.406 "percentage": 50, 00:12:01.406 "status": "finished", 00:12:01.406 "queue_depth": 1, 00:12:01.406 "io_size": 131072, 00:12:01.406 "runtime": 1.420965, 00:12:01.406 "iops": 16318.487788228422, 00:12:01.406 "mibps": 2039.8109735285527, 00:12:01.406 "io_failed": 1, 00:12:01.406 "io_timeout": 0, 00:12:01.406 "avg_latency_us": 85.28854695260006, 00:12:01.406 "min_latency_us": 24.705676855895195, 00:12:01.406 "max_latency_us": 1359.3711790393013 00:12:01.406 } 00:12:01.406 ], 00:12:01.406 "core_count": 1 00:12:01.406 } 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72663 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72663 ']' 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72663 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72663 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72663' 00:12:01.406 killing process with pid 72663 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72663 00:12:01.406 [2024-11-17 01:32:09.616343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:01.406 01:32:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72663 00:12:01.666 [2024-11-17 01:32:09.929708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.605 01:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.87yITtLPP1 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:02.606 ************************************ 00:12:02.606 END TEST raid_read_error_test 00:12:02.606 ************************************ 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:02.606 00:12:02.606 real 0m4.635s 
00:12:02.606 user 0m5.508s 00:12:02.606 sys 0m0.569s 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.606 01:32:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.867 01:32:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:02.867 01:32:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:02.867 01:32:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.867 01:32:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.867 ************************************ 00:12:02.867 START TEST raid_write_error_test 00:12:02.867 ************************************ 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EA31CNYVwE 00:12:02.867 01:32:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72805 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72805 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72805 ']' 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.867 01:32:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.867 [2024-11-17 01:32:11.237942] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:02.867 [2024-11-17 01:32:11.238173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72805 ] 00:12:03.128 [2024-11-17 01:32:11.410250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.128 [2024-11-17 01:32:11.522486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.387 [2024-11-17 01:32:11.714284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.387 [2024-11-17 01:32:11.714329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.647 BaseBdev1_malloc 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.647 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 true 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 [2024-11-17 01:32:12.112716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:03.907 [2024-11-17 01:32:12.112801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.907 [2024-11-17 01:32:12.112821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:03.907 [2024-11-17 01:32:12.112831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.907 [2024-11-17 01:32:12.114909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.907 [2024-11-17 01:32:12.114947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.907 BaseBdev1 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 BaseBdev2_malloc 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:03.907 01:32:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 true 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 [2024-11-17 01:32:12.179169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:03.907 [2024-11-17 01:32:12.179223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.907 [2024-11-17 01:32:12.179239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:03.907 [2024-11-17 01:32:12.179249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.907 [2024-11-17 01:32:12.181389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.907 [2024-11-17 01:32:12.181444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:03.907 BaseBdev2 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:03.907 BaseBdev3_malloc 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 true 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.907 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 [2024-11-17 01:32:12.257201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:03.907 [2024-11-17 01:32:12.257319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.907 [2024-11-17 01:32:12.257343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:03.907 [2024-11-17 01:32:12.257353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.907 [2024-11-17 01:32:12.259540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.908 [2024-11-17 01:32:12.259622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:03.908 BaseBdev3 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.908 BaseBdev4_malloc 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.908 true 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.908 [2024-11-17 01:32:12.325157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:03.908 [2024-11-17 01:32:12.325211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.908 [2024-11-17 01:32:12.325229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:03.908 [2024-11-17 01:32:12.325239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.908 [2024-11-17 01:32:12.327284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.908 [2024-11-17 01:32:12.327327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:03.908 BaseBdev4 
00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.908 [2024-11-17 01:32:12.337190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.908 [2024-11-17 01:32:12.338983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.908 [2024-11-17 01:32:12.339055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.908 [2024-11-17 01:32:12.339133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.908 [2024-11-17 01:32:12.339355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:03.908 [2024-11-17 01:32:12.339369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:03.908 [2024-11-17 01:32:12.339619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:03.908 [2024-11-17 01:32:12.339783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:03.908 [2024-11-17 01:32:12.339794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:03.908 [2024-11-17 01:32:12.339944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.908 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.167 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.167 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.167 "name": "raid_bdev1", 00:12:04.167 "uuid": "d751a3da-9e90-47db-93c9-433af171425e", 00:12:04.167 "strip_size_kb": 64, 00:12:04.167 "state": "online", 00:12:04.167 "raid_level": "concat", 00:12:04.167 "superblock": true, 00:12:04.167 "num_base_bdevs": 4, 00:12:04.167 "num_base_bdevs_discovered": 4, 00:12:04.167 
"num_base_bdevs_operational": 4, 00:12:04.167 "base_bdevs_list": [ 00:12:04.167 { 00:12:04.167 "name": "BaseBdev1", 00:12:04.167 "uuid": "bbd72555-1dd2-5999-acd1-e6b2bfe46b57", 00:12:04.167 "is_configured": true, 00:12:04.167 "data_offset": 2048, 00:12:04.167 "data_size": 63488 00:12:04.167 }, 00:12:04.167 { 00:12:04.167 "name": "BaseBdev2", 00:12:04.167 "uuid": "f3dd57b3-ebdf-50c5-8503-c1de5945c0fa", 00:12:04.167 "is_configured": true, 00:12:04.167 "data_offset": 2048, 00:12:04.167 "data_size": 63488 00:12:04.167 }, 00:12:04.167 { 00:12:04.167 "name": "BaseBdev3", 00:12:04.167 "uuid": "1362efaa-77a3-5130-8f76-eea363ce3bfc", 00:12:04.167 "is_configured": true, 00:12:04.167 "data_offset": 2048, 00:12:04.167 "data_size": 63488 00:12:04.167 }, 00:12:04.167 { 00:12:04.167 "name": "BaseBdev4", 00:12:04.167 "uuid": "56186a1e-f0ad-5974-b5c7-66d88b7ed065", 00:12:04.167 "is_configured": true, 00:12:04.167 "data_offset": 2048, 00:12:04.167 "data_size": 63488 00:12:04.167 } 00:12:04.167 ] 00:12:04.167 }' 00:12:04.167 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.167 01:32:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.426 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:04.426 01:32:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:04.685 [2024-11-17 01:32:12.893574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.624 01:32:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.624 "name": "raid_bdev1", 00:12:05.624 "uuid": "d751a3da-9e90-47db-93c9-433af171425e", 00:12:05.624 "strip_size_kb": 64, 00:12:05.624 "state": "online", 00:12:05.624 "raid_level": "concat", 00:12:05.624 "superblock": true, 00:12:05.624 "num_base_bdevs": 4, 00:12:05.624 "num_base_bdevs_discovered": 4, 00:12:05.624 "num_base_bdevs_operational": 4, 00:12:05.624 "base_bdevs_list": [ 00:12:05.624 { 00:12:05.624 "name": "BaseBdev1", 00:12:05.624 "uuid": "bbd72555-1dd2-5999-acd1-e6b2bfe46b57", 00:12:05.624 "is_configured": true, 00:12:05.624 "data_offset": 2048, 00:12:05.624 "data_size": 63488 00:12:05.624 }, 00:12:05.624 { 00:12:05.624 "name": "BaseBdev2", 00:12:05.624 "uuid": "f3dd57b3-ebdf-50c5-8503-c1de5945c0fa", 00:12:05.624 "is_configured": true, 00:12:05.624 "data_offset": 2048, 00:12:05.624 "data_size": 63488 00:12:05.624 }, 00:12:05.624 { 00:12:05.624 "name": "BaseBdev3", 00:12:05.624 "uuid": "1362efaa-77a3-5130-8f76-eea363ce3bfc", 00:12:05.624 "is_configured": true, 00:12:05.624 "data_offset": 2048, 00:12:05.624 "data_size": 63488 00:12:05.624 }, 00:12:05.624 { 00:12:05.624 "name": "BaseBdev4", 00:12:05.624 "uuid": "56186a1e-f0ad-5974-b5c7-66d88b7ed065", 00:12:05.624 "is_configured": true, 00:12:05.624 "data_offset": 2048, 00:12:05.624 "data_size": 63488 00:12:05.624 } 00:12:05.624 ] 00:12:05.624 }' 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.624 01:32:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.888 [2024-11-17 01:32:14.257405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.888 [2024-11-17 01:32:14.257439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.888 [2024-11-17 01:32:14.260007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.888 [2024-11-17 01:32:14.260120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.888 [2024-11-17 01:32:14.260171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.888 [2024-11-17 01:32:14.260186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:05.888 { 00:12:05.888 "results": [ 00:12:05.888 { 00:12:05.888 "job": "raid_bdev1", 00:12:05.888 "core_mask": "0x1", 00:12:05.888 "workload": "randrw", 00:12:05.888 "percentage": 50, 00:12:05.888 "status": "finished", 00:12:05.888 "queue_depth": 1, 00:12:05.888 "io_size": 131072, 00:12:05.888 "runtime": 1.364613, 00:12:05.888 "iops": 16237.57065189911, 00:12:05.888 "mibps": 2029.6963314873888, 00:12:05.888 "io_failed": 1, 00:12:05.888 "io_timeout": 0, 00:12:05.888 "avg_latency_us": 85.68722525629082, 00:12:05.888 "min_latency_us": 24.817467248908297, 00:12:05.888 "max_latency_us": 1366.5257641921398 00:12:05.888 } 00:12:05.888 ], 00:12:05.888 "core_count": 1 00:12:05.888 } 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72805 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72805 ']' 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72805 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72805 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.888 killing process with pid 72805 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72805' 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72805 00:12:05.888 [2024-11-17 01:32:14.304491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.888 01:32:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72805 00:12:06.457 [2024-11-17 01:32:14.622360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EA31CNYVwE 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:07.395 ************************************ 00:12:07.395 END TEST raid_write_error_test 00:12:07.395 ************************************ 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:07.395 01:32:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:07.395 00:12:07.395 real 0m4.619s 00:12:07.395 user 0m5.453s 00:12:07.395 sys 0m0.580s 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.395 01:32:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.395 01:32:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:07.395 01:32:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:07.395 01:32:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:07.395 01:32:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.395 01:32:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.395 ************************************ 00:12:07.395 START TEST raid_state_function_test 00:12:07.395 ************************************ 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:07.395 01:32:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72949 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72949' 00:12:07.395 Process raid pid: 72949 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72949 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72949 ']' 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.395 01:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.654 [2024-11-17 01:32:15.906266] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:07.654 [2024-11-17 01:32:15.906501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.654 [2024-11-17 01:32:16.077528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.913 [2024-11-17 01:32:16.192832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.173 [2024-11-17 01:32:16.391102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.173 [2024-11-17 01:32:16.391146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.432 [2024-11-17 01:32:16.740935] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.432 [2024-11-17 01:32:16.740987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.432 [2024-11-17 01:32:16.740998] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.432 [2024-11-17 01:32:16.741007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.432 [2024-11-17 01:32:16.741013] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:08.432 [2024-11-17 01:32:16.741021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.432 [2024-11-17 01:32:16.741027] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:08.432 [2024-11-17 01:32:16.741035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.432 "name": "Existed_Raid", 00:12:08.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.432 "strip_size_kb": 0, 00:12:08.432 "state": "configuring", 00:12:08.432 "raid_level": "raid1", 00:12:08.432 "superblock": false, 00:12:08.432 "num_base_bdevs": 4, 00:12:08.432 "num_base_bdevs_discovered": 0, 00:12:08.432 "num_base_bdevs_operational": 4, 00:12:08.432 "base_bdevs_list": [ 00:12:08.432 { 00:12:08.432 "name": "BaseBdev1", 00:12:08.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.432 "is_configured": false, 00:12:08.432 "data_offset": 0, 00:12:08.432 "data_size": 0 00:12:08.432 }, 00:12:08.432 { 00:12:08.432 "name": "BaseBdev2", 00:12:08.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.432 "is_configured": false, 00:12:08.432 "data_offset": 0, 00:12:08.432 "data_size": 0 00:12:08.432 }, 00:12:08.432 { 00:12:08.432 "name": "BaseBdev3", 00:12:08.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.432 "is_configured": false, 00:12:08.432 "data_offset": 0, 00:12:08.432 "data_size": 0 00:12:08.432 }, 00:12:08.432 { 00:12:08.432 "name": "BaseBdev4", 00:12:08.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.432 "is_configured": false, 00:12:08.432 "data_offset": 0, 00:12:08.432 "data_size": 0 00:12:08.432 } 00:12:08.432 ] 00:12:08.432 }' 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.432 01:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.001 [2024-11-17 01:32:17.244025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.001 [2024-11-17 01:32:17.244119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.001 [2024-11-17 01:32:17.255991] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.001 [2024-11-17 01:32:17.256091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.001 [2024-11-17 01:32:17.256119] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.001 [2024-11-17 01:32:17.256142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.001 [2024-11-17 01:32:17.256161] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.001 [2024-11-17 01:32:17.256182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.001 [2024-11-17 01:32:17.256200] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:09.001 [2024-11-17 01:32:17.256230] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.001 [2024-11-17 01:32:17.303411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.001 BaseBdev1 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.001 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.002 [ 00:12:09.002 { 00:12:09.002 "name": "BaseBdev1", 00:12:09.002 "aliases": [ 00:12:09.002 "25d483d9-9a9e-4958-beab-637f2c39878b" 00:12:09.002 ], 00:12:09.002 "product_name": "Malloc disk", 00:12:09.002 "block_size": 512, 00:12:09.002 "num_blocks": 65536, 00:12:09.002 "uuid": "25d483d9-9a9e-4958-beab-637f2c39878b", 00:12:09.002 "assigned_rate_limits": { 00:12:09.002 "rw_ios_per_sec": 0, 00:12:09.002 "rw_mbytes_per_sec": 0, 00:12:09.002 "r_mbytes_per_sec": 0, 00:12:09.002 "w_mbytes_per_sec": 0 00:12:09.002 }, 00:12:09.002 "claimed": true, 00:12:09.002 "claim_type": "exclusive_write", 00:12:09.002 "zoned": false, 00:12:09.002 "supported_io_types": { 00:12:09.002 "read": true, 00:12:09.002 "write": true, 00:12:09.002 "unmap": true, 00:12:09.002 "flush": true, 00:12:09.002 "reset": true, 00:12:09.002 "nvme_admin": false, 00:12:09.002 "nvme_io": false, 00:12:09.002 "nvme_io_md": false, 00:12:09.002 "write_zeroes": true, 00:12:09.002 "zcopy": true, 00:12:09.002 "get_zone_info": false, 00:12:09.002 "zone_management": false, 00:12:09.002 "zone_append": false, 00:12:09.002 "compare": false, 00:12:09.002 "compare_and_write": false, 00:12:09.002 "abort": true, 00:12:09.002 "seek_hole": false, 00:12:09.002 "seek_data": false, 00:12:09.002 "copy": true, 00:12:09.002 "nvme_iov_md": false 00:12:09.002 }, 00:12:09.002 "memory_domains": [ 00:12:09.002 { 00:12:09.002 "dma_device_id": "system", 00:12:09.002 "dma_device_type": 1 00:12:09.002 }, 00:12:09.002 { 00:12:09.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.002 "dma_device_type": 2 00:12:09.002 } 00:12:09.002 ], 00:12:09.002 "driver_specific": {} 00:12:09.002 } 00:12:09.002 ] 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.002 "name": "Existed_Raid", 
00:12:09.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.002 "strip_size_kb": 0, 00:12:09.002 "state": "configuring", 00:12:09.002 "raid_level": "raid1", 00:12:09.002 "superblock": false, 00:12:09.002 "num_base_bdevs": 4, 00:12:09.002 "num_base_bdevs_discovered": 1, 00:12:09.002 "num_base_bdevs_operational": 4, 00:12:09.002 "base_bdevs_list": [ 00:12:09.002 { 00:12:09.002 "name": "BaseBdev1", 00:12:09.002 "uuid": "25d483d9-9a9e-4958-beab-637f2c39878b", 00:12:09.002 "is_configured": true, 00:12:09.002 "data_offset": 0, 00:12:09.002 "data_size": 65536 00:12:09.002 }, 00:12:09.002 { 00:12:09.002 "name": "BaseBdev2", 00:12:09.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.002 "is_configured": false, 00:12:09.002 "data_offset": 0, 00:12:09.002 "data_size": 0 00:12:09.002 }, 00:12:09.002 { 00:12:09.002 "name": "BaseBdev3", 00:12:09.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.002 "is_configured": false, 00:12:09.002 "data_offset": 0, 00:12:09.002 "data_size": 0 00:12:09.002 }, 00:12:09.002 { 00:12:09.002 "name": "BaseBdev4", 00:12:09.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.002 "is_configured": false, 00:12:09.002 "data_offset": 0, 00:12:09.002 "data_size": 0 00:12:09.002 } 00:12:09.002 ] 00:12:09.002 }' 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.002 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.571 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.571 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.571 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.571 [2024-11-17 01:32:17.738761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.571 [2024-11-17 01:32:17.738835] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:09.571 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.571 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:09.571 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.571 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.571 [2024-11-17 01:32:17.750793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.571 [2024-11-17 01:32:17.752596] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.571 [2024-11-17 01:32:17.752643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.571 [2024-11-17 01:32:17.752653] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.571 [2024-11-17 01:32:17.752664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.571 [2024-11-17 01:32:17.752671] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:09.571 [2024-11-17 01:32:17.752679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.572 
01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.572 "name": "Existed_Raid", 00:12:09.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.572 "strip_size_kb": 0, 00:12:09.572 "state": "configuring", 00:12:09.572 "raid_level": "raid1", 00:12:09.572 "superblock": false, 00:12:09.572 "num_base_bdevs": 4, 00:12:09.572 "num_base_bdevs_discovered": 1, 
00:12:09.572 "num_base_bdevs_operational": 4, 00:12:09.572 "base_bdevs_list": [ 00:12:09.572 { 00:12:09.572 "name": "BaseBdev1", 00:12:09.572 "uuid": "25d483d9-9a9e-4958-beab-637f2c39878b", 00:12:09.572 "is_configured": true, 00:12:09.572 "data_offset": 0, 00:12:09.572 "data_size": 65536 00:12:09.572 }, 00:12:09.572 { 00:12:09.572 "name": "BaseBdev2", 00:12:09.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.572 "is_configured": false, 00:12:09.572 "data_offset": 0, 00:12:09.572 "data_size": 0 00:12:09.572 }, 00:12:09.572 { 00:12:09.572 "name": "BaseBdev3", 00:12:09.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.572 "is_configured": false, 00:12:09.572 "data_offset": 0, 00:12:09.572 "data_size": 0 00:12:09.572 }, 00:12:09.572 { 00:12:09.572 "name": "BaseBdev4", 00:12:09.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.572 "is_configured": false, 00:12:09.572 "data_offset": 0, 00:12:09.572 "data_size": 0 00:12:09.572 } 00:12:09.572 ] 00:12:09.572 }' 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.572 01:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.831 [2024-11-17 01:32:18.249904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.831 BaseBdev2 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:09.831 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.832 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.832 [ 00:12:09.832 { 00:12:09.832 "name": "BaseBdev2", 00:12:09.832 "aliases": [ 00:12:09.832 "50c6e88c-8a6e-44b4-87ac-2ee3df00c3da" 00:12:09.832 ], 00:12:09.832 "product_name": "Malloc disk", 00:12:09.832 "block_size": 512, 00:12:09.832 "num_blocks": 65536, 00:12:09.832 "uuid": "50c6e88c-8a6e-44b4-87ac-2ee3df00c3da", 00:12:09.832 "assigned_rate_limits": { 00:12:09.832 "rw_ios_per_sec": 0, 00:12:09.832 "rw_mbytes_per_sec": 0, 00:12:09.832 "r_mbytes_per_sec": 0, 00:12:09.832 "w_mbytes_per_sec": 0 00:12:09.832 }, 00:12:09.832 "claimed": true, 00:12:09.832 "claim_type": "exclusive_write", 00:12:09.832 "zoned": false, 00:12:09.832 "supported_io_types": { 00:12:09.832 "read": true, 
00:12:09.832 "write": true, 00:12:09.832 "unmap": true, 00:12:09.832 "flush": true, 00:12:09.832 "reset": true, 00:12:09.832 "nvme_admin": false, 00:12:09.832 "nvme_io": false, 00:12:09.832 "nvme_io_md": false, 00:12:09.832 "write_zeroes": true, 00:12:09.832 "zcopy": true, 00:12:09.832 "get_zone_info": false, 00:12:09.832 "zone_management": false, 00:12:09.832 "zone_append": false, 00:12:09.832 "compare": false, 00:12:09.832 "compare_and_write": false, 00:12:09.832 "abort": true, 00:12:09.832 "seek_hole": false, 00:12:09.832 "seek_data": false, 00:12:09.832 "copy": true, 00:12:09.832 "nvme_iov_md": false 00:12:09.832 }, 00:12:09.832 "memory_domains": [ 00:12:09.832 { 00:12:09.832 "dma_device_id": "system", 00:12:09.832 "dma_device_type": 1 00:12:09.832 }, 00:12:09.832 { 00:12:09.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.832 "dma_device_type": 2 00:12:09.832 } 00:12:09.832 ], 00:12:10.091 "driver_specific": {} 00:12:10.091 } 00:12:10.091 ] 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.091 "name": "Existed_Raid", 00:12:10.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.091 "strip_size_kb": 0, 00:12:10.091 "state": "configuring", 00:12:10.091 "raid_level": "raid1", 00:12:10.091 "superblock": false, 00:12:10.091 "num_base_bdevs": 4, 00:12:10.091 "num_base_bdevs_discovered": 2, 00:12:10.091 "num_base_bdevs_operational": 4, 00:12:10.091 "base_bdevs_list": [ 00:12:10.091 { 00:12:10.091 "name": "BaseBdev1", 00:12:10.091 "uuid": "25d483d9-9a9e-4958-beab-637f2c39878b", 00:12:10.091 "is_configured": true, 00:12:10.091 "data_offset": 0, 00:12:10.091 "data_size": 65536 00:12:10.091 }, 00:12:10.091 { 00:12:10.091 "name": "BaseBdev2", 00:12:10.091 "uuid": "50c6e88c-8a6e-44b4-87ac-2ee3df00c3da", 00:12:10.091 "is_configured": true, 
00:12:10.091 "data_offset": 0, 00:12:10.091 "data_size": 65536 00:12:10.091 }, 00:12:10.091 { 00:12:10.091 "name": "BaseBdev3", 00:12:10.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.091 "is_configured": false, 00:12:10.091 "data_offset": 0, 00:12:10.091 "data_size": 0 00:12:10.091 }, 00:12:10.091 { 00:12:10.091 "name": "BaseBdev4", 00:12:10.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.091 "is_configured": false, 00:12:10.091 "data_offset": 0, 00:12:10.091 "data_size": 0 00:12:10.091 } 00:12:10.091 ] 00:12:10.091 }' 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.091 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.350 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.350 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.350 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.350 [2024-11-17 01:32:18.787905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.350 BaseBdev3 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.351 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.610 [ 00:12:10.610 { 00:12:10.610 "name": "BaseBdev3", 00:12:10.610 "aliases": [ 00:12:10.610 "729eeee9-be32-4904-bdd8-e1b67c6f7292" 00:12:10.610 ], 00:12:10.610 "product_name": "Malloc disk", 00:12:10.610 "block_size": 512, 00:12:10.610 "num_blocks": 65536, 00:12:10.610 "uuid": "729eeee9-be32-4904-bdd8-e1b67c6f7292", 00:12:10.610 "assigned_rate_limits": { 00:12:10.610 "rw_ios_per_sec": 0, 00:12:10.610 "rw_mbytes_per_sec": 0, 00:12:10.610 "r_mbytes_per_sec": 0, 00:12:10.610 "w_mbytes_per_sec": 0 00:12:10.610 }, 00:12:10.610 "claimed": true, 00:12:10.610 "claim_type": "exclusive_write", 00:12:10.610 "zoned": false, 00:12:10.610 "supported_io_types": { 00:12:10.610 "read": true, 00:12:10.610 "write": true, 00:12:10.610 "unmap": true, 00:12:10.610 "flush": true, 00:12:10.610 "reset": true, 00:12:10.610 "nvme_admin": false, 00:12:10.610 "nvme_io": false, 00:12:10.610 "nvme_io_md": false, 00:12:10.610 "write_zeroes": true, 00:12:10.610 "zcopy": true, 00:12:10.610 "get_zone_info": false, 00:12:10.610 "zone_management": false, 00:12:10.610 "zone_append": false, 00:12:10.610 "compare": false, 00:12:10.610 "compare_and_write": false, 
00:12:10.610 "abort": true, 00:12:10.610 "seek_hole": false, 00:12:10.610 "seek_data": false, 00:12:10.610 "copy": true, 00:12:10.610 "nvme_iov_md": false 00:12:10.610 }, 00:12:10.610 "memory_domains": [ 00:12:10.610 { 00:12:10.610 "dma_device_id": "system", 00:12:10.610 "dma_device_type": 1 00:12:10.610 }, 00:12:10.610 { 00:12:10.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.610 "dma_device_type": 2 00:12:10.610 } 00:12:10.610 ], 00:12:10.610 "driver_specific": {} 00:12:10.610 } 00:12:10.610 ] 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.610 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.610 "name": "Existed_Raid", 00:12:10.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.610 "strip_size_kb": 0, 00:12:10.610 "state": "configuring", 00:12:10.610 "raid_level": "raid1", 00:12:10.610 "superblock": false, 00:12:10.610 "num_base_bdevs": 4, 00:12:10.610 "num_base_bdevs_discovered": 3, 00:12:10.610 "num_base_bdevs_operational": 4, 00:12:10.610 "base_bdevs_list": [ 00:12:10.610 { 00:12:10.610 "name": "BaseBdev1", 00:12:10.610 "uuid": "25d483d9-9a9e-4958-beab-637f2c39878b", 00:12:10.610 "is_configured": true, 00:12:10.610 "data_offset": 0, 00:12:10.610 "data_size": 65536 00:12:10.610 }, 00:12:10.610 { 00:12:10.610 "name": "BaseBdev2", 00:12:10.610 "uuid": "50c6e88c-8a6e-44b4-87ac-2ee3df00c3da", 00:12:10.610 "is_configured": true, 00:12:10.610 "data_offset": 0, 00:12:10.610 "data_size": 65536 00:12:10.610 }, 00:12:10.610 { 00:12:10.611 "name": "BaseBdev3", 00:12:10.611 "uuid": "729eeee9-be32-4904-bdd8-e1b67c6f7292", 00:12:10.611 "is_configured": true, 00:12:10.611 "data_offset": 0, 00:12:10.611 "data_size": 65536 00:12:10.611 }, 00:12:10.611 { 00:12:10.611 "name": "BaseBdev4", 00:12:10.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.611 "is_configured": false, 
00:12:10.611 "data_offset": 0, 00:12:10.611 "data_size": 0 00:12:10.611 } 00:12:10.611 ] 00:12:10.611 }' 00:12:10.611 01:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.611 01:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 [2024-11-17 01:32:19.261239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.872 [2024-11-17 01:32:19.261365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:10.872 [2024-11-17 01:32:19.261391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:10.872 [2024-11-17 01:32:19.261709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:10.872 [2024-11-17 01:32:19.261936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:10.872 [2024-11-17 01:32:19.261984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:10.872 [2024-11-17 01:32:19.262287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.872 BaseBdev4 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.872 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 [ 00:12:10.872 { 00:12:10.872 "name": "BaseBdev4", 00:12:10.872 "aliases": [ 00:12:10.872 "3997850d-83c8-4fd2-bb4b-bcc68b78277b" 00:12:10.872 ], 00:12:10.872 "product_name": "Malloc disk", 00:12:10.872 "block_size": 512, 00:12:10.872 "num_blocks": 65536, 00:12:10.872 "uuid": "3997850d-83c8-4fd2-bb4b-bcc68b78277b", 00:12:10.872 "assigned_rate_limits": { 00:12:10.872 "rw_ios_per_sec": 0, 00:12:10.872 "rw_mbytes_per_sec": 0, 00:12:10.872 "r_mbytes_per_sec": 0, 00:12:10.872 "w_mbytes_per_sec": 0 00:12:10.872 }, 00:12:10.872 "claimed": true, 00:12:10.872 "claim_type": "exclusive_write", 00:12:10.872 "zoned": false, 00:12:10.873 "supported_io_types": { 00:12:10.873 "read": true, 00:12:10.873 "write": true, 00:12:10.873 "unmap": true, 00:12:10.873 "flush": true, 00:12:10.873 "reset": true, 00:12:10.873 
"nvme_admin": false, 00:12:10.873 "nvme_io": false, 00:12:10.873 "nvme_io_md": false, 00:12:10.873 "write_zeroes": true, 00:12:10.873 "zcopy": true, 00:12:10.873 "get_zone_info": false, 00:12:10.873 "zone_management": false, 00:12:10.873 "zone_append": false, 00:12:10.873 "compare": false, 00:12:10.873 "compare_and_write": false, 00:12:10.873 "abort": true, 00:12:10.873 "seek_hole": false, 00:12:10.873 "seek_data": false, 00:12:10.873 "copy": true, 00:12:10.873 "nvme_iov_md": false 00:12:10.873 }, 00:12:10.873 "memory_domains": [ 00:12:10.873 { 00:12:10.873 "dma_device_id": "system", 00:12:10.873 "dma_device_type": 1 00:12:10.873 }, 00:12:10.873 { 00:12:10.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.873 "dma_device_type": 2 00:12:10.873 } 00:12:10.873 ], 00:12:10.873 "driver_specific": {} 00:12:10.873 } 00:12:10.873 ] 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.873 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.873 01:32:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.874 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.874 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.874 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.874 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.874 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.874 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.874 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.137 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.137 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.137 "name": "Existed_Raid", 00:12:11.137 "uuid": "e3e76580-fa53-489e-a45d-02b1610b32bb", 00:12:11.137 "strip_size_kb": 0, 00:12:11.137 "state": "online", 00:12:11.137 "raid_level": "raid1", 00:12:11.137 "superblock": false, 00:12:11.137 "num_base_bdevs": 4, 00:12:11.137 "num_base_bdevs_discovered": 4, 00:12:11.137 "num_base_bdevs_operational": 4, 00:12:11.137 "base_bdevs_list": [ 00:12:11.137 { 00:12:11.137 "name": "BaseBdev1", 00:12:11.137 "uuid": "25d483d9-9a9e-4958-beab-637f2c39878b", 00:12:11.137 "is_configured": true, 00:12:11.137 "data_offset": 0, 00:12:11.137 "data_size": 65536 00:12:11.137 }, 00:12:11.137 { 00:12:11.137 "name": "BaseBdev2", 00:12:11.137 "uuid": "50c6e88c-8a6e-44b4-87ac-2ee3df00c3da", 00:12:11.138 "is_configured": true, 00:12:11.138 "data_offset": 0, 00:12:11.138 "data_size": 65536 00:12:11.138 }, 00:12:11.138 { 00:12:11.138 "name": "BaseBdev3", 00:12:11.138 "uuid": 
"729eeee9-be32-4904-bdd8-e1b67c6f7292", 00:12:11.138 "is_configured": true, 00:12:11.138 "data_offset": 0, 00:12:11.138 "data_size": 65536 00:12:11.138 }, 00:12:11.138 { 00:12:11.138 "name": "BaseBdev4", 00:12:11.138 "uuid": "3997850d-83c8-4fd2-bb4b-bcc68b78277b", 00:12:11.138 "is_configured": true, 00:12:11.138 "data_offset": 0, 00:12:11.138 "data_size": 65536 00:12:11.138 } 00:12:11.138 ] 00:12:11.138 }' 00:12:11.138 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.138 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:11.397 [2024-11-17 01:32:19.748812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.397 01:32:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:11.397 "name": "Existed_Raid", 00:12:11.397 "aliases": [ 00:12:11.397 "e3e76580-fa53-489e-a45d-02b1610b32bb" 00:12:11.397 ], 00:12:11.397 "product_name": "Raid Volume", 00:12:11.397 "block_size": 512, 00:12:11.397 "num_blocks": 65536, 00:12:11.397 "uuid": "e3e76580-fa53-489e-a45d-02b1610b32bb", 00:12:11.397 "assigned_rate_limits": { 00:12:11.397 "rw_ios_per_sec": 0, 00:12:11.397 "rw_mbytes_per_sec": 0, 00:12:11.397 "r_mbytes_per_sec": 0, 00:12:11.397 "w_mbytes_per_sec": 0 00:12:11.397 }, 00:12:11.397 "claimed": false, 00:12:11.397 "zoned": false, 00:12:11.397 "supported_io_types": { 00:12:11.397 "read": true, 00:12:11.397 "write": true, 00:12:11.397 "unmap": false, 00:12:11.397 "flush": false, 00:12:11.397 "reset": true, 00:12:11.397 "nvme_admin": false, 00:12:11.397 "nvme_io": false, 00:12:11.397 "nvme_io_md": false, 00:12:11.397 "write_zeroes": true, 00:12:11.397 "zcopy": false, 00:12:11.397 "get_zone_info": false, 00:12:11.397 "zone_management": false, 00:12:11.397 "zone_append": false, 00:12:11.397 "compare": false, 00:12:11.397 "compare_and_write": false, 00:12:11.397 "abort": false, 00:12:11.397 "seek_hole": false, 00:12:11.397 "seek_data": false, 00:12:11.397 "copy": false, 00:12:11.397 "nvme_iov_md": false 00:12:11.397 }, 00:12:11.397 "memory_domains": [ 00:12:11.397 { 00:12:11.397 "dma_device_id": "system", 00:12:11.397 "dma_device_type": 1 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.397 "dma_device_type": 2 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "dma_device_id": "system", 00:12:11.397 "dma_device_type": 1 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.397 "dma_device_type": 2 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "dma_device_id": "system", 00:12:11.397 "dma_device_type": 1 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:11.397 "dma_device_type": 2 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "dma_device_id": "system", 00:12:11.397 "dma_device_type": 1 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.397 "dma_device_type": 2 00:12:11.397 } 00:12:11.397 ], 00:12:11.397 "driver_specific": { 00:12:11.397 "raid": { 00:12:11.397 "uuid": "e3e76580-fa53-489e-a45d-02b1610b32bb", 00:12:11.397 "strip_size_kb": 0, 00:12:11.397 "state": "online", 00:12:11.397 "raid_level": "raid1", 00:12:11.397 "superblock": false, 00:12:11.397 "num_base_bdevs": 4, 00:12:11.397 "num_base_bdevs_discovered": 4, 00:12:11.397 "num_base_bdevs_operational": 4, 00:12:11.397 "base_bdevs_list": [ 00:12:11.397 { 00:12:11.397 "name": "BaseBdev1", 00:12:11.397 "uuid": "25d483d9-9a9e-4958-beab-637f2c39878b", 00:12:11.397 "is_configured": true, 00:12:11.397 "data_offset": 0, 00:12:11.397 "data_size": 65536 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "name": "BaseBdev2", 00:12:11.397 "uuid": "50c6e88c-8a6e-44b4-87ac-2ee3df00c3da", 00:12:11.397 "is_configured": true, 00:12:11.397 "data_offset": 0, 00:12:11.397 "data_size": 65536 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "name": "BaseBdev3", 00:12:11.397 "uuid": "729eeee9-be32-4904-bdd8-e1b67c6f7292", 00:12:11.397 "is_configured": true, 00:12:11.397 "data_offset": 0, 00:12:11.397 "data_size": 65536 00:12:11.397 }, 00:12:11.397 { 00:12:11.397 "name": "BaseBdev4", 00:12:11.397 "uuid": "3997850d-83c8-4fd2-bb4b-bcc68b78277b", 00:12:11.397 "is_configured": true, 00:12:11.397 "data_offset": 0, 00:12:11.397 "data_size": 65536 00:12:11.397 } 00:12:11.397 ] 00:12:11.397 } 00:12:11.397 } 00:12:11.397 }' 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:11.397 BaseBdev2 00:12:11.397 BaseBdev3 
00:12:11.397 BaseBdev4' 00:12:11.397 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.657 01:32:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.657 01:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.657 01:32:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.657 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.657 [2024-11-17 01:32:20.083972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.916 
01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.916 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.916 "name": "Existed_Raid", 00:12:11.916 "uuid": "e3e76580-fa53-489e-a45d-02b1610b32bb", 00:12:11.916 "strip_size_kb": 0, 00:12:11.916 "state": "online", 00:12:11.916 "raid_level": "raid1", 00:12:11.916 "superblock": false, 00:12:11.916 "num_base_bdevs": 4, 00:12:11.916 "num_base_bdevs_discovered": 3, 00:12:11.916 "num_base_bdevs_operational": 3, 00:12:11.916 "base_bdevs_list": [ 00:12:11.916 { 00:12:11.916 "name": null, 00:12:11.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.916 "is_configured": false, 00:12:11.916 "data_offset": 0, 00:12:11.916 "data_size": 65536 00:12:11.916 }, 00:12:11.916 { 00:12:11.916 "name": "BaseBdev2", 00:12:11.916 "uuid": "50c6e88c-8a6e-44b4-87ac-2ee3df00c3da", 00:12:11.916 "is_configured": true, 00:12:11.916 "data_offset": 0, 00:12:11.916 "data_size": 65536 00:12:11.916 }, 00:12:11.916 { 00:12:11.916 "name": "BaseBdev3", 00:12:11.916 "uuid": "729eeee9-be32-4904-bdd8-e1b67c6f7292", 00:12:11.916 "is_configured": true, 00:12:11.916 "data_offset": 0, 
00:12:11.916 "data_size": 65536 00:12:11.916 }, 00:12:11.916 { 00:12:11.916 "name": "BaseBdev4", 00:12:11.916 "uuid": "3997850d-83c8-4fd2-bb4b-bcc68b78277b", 00:12:11.917 "is_configured": true, 00:12:11.917 "data_offset": 0, 00:12:11.917 "data_size": 65536 00:12:11.917 } 00:12:11.917 ] 00:12:11.917 }' 00:12:11.917 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.917 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.176 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:12.176 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.176 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.176 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.176 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.176 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.435 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.436 [2024-11-17 01:32:20.681333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.436 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.436 [2024-11-17 01:32:20.831030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 01:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 [2024-11-17 01:32:20.980648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:12.710 [2024-11-17 01:32:20.980740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.710 [2024-11-17 01:32:21.075240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.710 [2024-11-17 01:32:21.075297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.710 [2024-11-17 01:32:21.075310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.976 BaseBdev2 00:12:12.976 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.976 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:12.976 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:12.976 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.976 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.976 01:32:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.977 [ 00:12:12.977 { 00:12:12.977 "name": "BaseBdev2", 00:12:12.977 "aliases": [ 00:12:12.977 "fd9c865c-871c-42ec-aedb-d41b6d16c995" 00:12:12.977 ], 00:12:12.977 "product_name": "Malloc disk", 00:12:12.977 "block_size": 512, 00:12:12.977 "num_blocks": 65536, 00:12:12.977 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:12.977 "assigned_rate_limits": { 00:12:12.977 "rw_ios_per_sec": 0, 00:12:12.977 "rw_mbytes_per_sec": 0, 00:12:12.977 "r_mbytes_per_sec": 0, 00:12:12.977 "w_mbytes_per_sec": 0 00:12:12.977 }, 00:12:12.977 "claimed": false, 00:12:12.977 "zoned": false, 00:12:12.977 "supported_io_types": { 00:12:12.977 "read": true, 00:12:12.977 "write": true, 00:12:12.977 "unmap": true, 00:12:12.977 "flush": true, 00:12:12.977 "reset": true, 00:12:12.977 "nvme_admin": false, 00:12:12.977 "nvme_io": false, 00:12:12.977 "nvme_io_md": false, 00:12:12.977 "write_zeroes": true, 00:12:12.977 "zcopy": true, 00:12:12.977 "get_zone_info": false, 00:12:12.977 "zone_management": false, 00:12:12.977 "zone_append": false, 
00:12:12.977 "compare": false, 00:12:12.977 "compare_and_write": false, 00:12:12.977 "abort": true, 00:12:12.977 "seek_hole": false, 00:12:12.977 "seek_data": false, 00:12:12.977 "copy": true, 00:12:12.977 "nvme_iov_md": false 00:12:12.977 }, 00:12:12.977 "memory_domains": [ 00:12:12.977 { 00:12:12.977 "dma_device_id": "system", 00:12:12.977 "dma_device_type": 1 00:12:12.977 }, 00:12:12.977 { 00:12:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.977 "dma_device_type": 2 00:12:12.977 } 00:12:12.977 ], 00:12:12.977 "driver_specific": {} 00:12:12.977 } 00:12:12.977 ] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.977 BaseBdev3 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.977 [ 00:12:12.977 { 00:12:12.977 "name": "BaseBdev3", 00:12:12.977 "aliases": [ 00:12:12.977 "51a7a8da-f084-4012-93ee-b71ee43c7fc1" 00:12:12.977 ], 00:12:12.977 "product_name": "Malloc disk", 00:12:12.977 "block_size": 512, 00:12:12.977 "num_blocks": 65536, 00:12:12.977 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:12.977 "assigned_rate_limits": { 00:12:12.977 "rw_ios_per_sec": 0, 00:12:12.977 "rw_mbytes_per_sec": 0, 00:12:12.977 "r_mbytes_per_sec": 0, 00:12:12.977 "w_mbytes_per_sec": 0 00:12:12.977 }, 00:12:12.977 "claimed": false, 00:12:12.977 "zoned": false, 00:12:12.977 "supported_io_types": { 00:12:12.977 "read": true, 00:12:12.977 "write": true, 00:12:12.977 "unmap": true, 00:12:12.977 "flush": true, 00:12:12.977 "reset": true, 00:12:12.977 "nvme_admin": false, 00:12:12.977 "nvme_io": false, 00:12:12.977 "nvme_io_md": false, 00:12:12.977 "write_zeroes": true, 00:12:12.977 "zcopy": true, 00:12:12.977 "get_zone_info": false, 00:12:12.977 "zone_management": false, 00:12:12.977 "zone_append": false, 
00:12:12.977 "compare": false, 00:12:12.977 "compare_and_write": false, 00:12:12.977 "abort": true, 00:12:12.977 "seek_hole": false, 00:12:12.977 "seek_data": false, 00:12:12.977 "copy": true, 00:12:12.977 "nvme_iov_md": false 00:12:12.977 }, 00:12:12.977 "memory_domains": [ 00:12:12.977 { 00:12:12.977 "dma_device_id": "system", 00:12:12.977 "dma_device_type": 1 00:12:12.977 }, 00:12:12.977 { 00:12:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.977 "dma_device_type": 2 00:12:12.977 } 00:12:12.977 ], 00:12:12.977 "driver_specific": {} 00:12:12.977 } 00:12:12.977 ] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.977 BaseBdev4 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.977 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.977 [ 00:12:12.977 { 00:12:12.977 "name": "BaseBdev4", 00:12:12.977 "aliases": [ 00:12:12.977 "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343" 00:12:12.977 ], 00:12:12.977 "product_name": "Malloc disk", 00:12:12.977 "block_size": 512, 00:12:12.977 "num_blocks": 65536, 00:12:12.977 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:12.977 "assigned_rate_limits": { 00:12:12.977 "rw_ios_per_sec": 0, 00:12:12.977 "rw_mbytes_per_sec": 0, 00:12:12.977 "r_mbytes_per_sec": 0, 00:12:12.977 "w_mbytes_per_sec": 0 00:12:12.977 }, 00:12:12.977 "claimed": false, 00:12:12.977 "zoned": false, 00:12:12.977 "supported_io_types": { 00:12:12.977 "read": true, 00:12:12.977 "write": true, 00:12:12.977 "unmap": true, 00:12:12.977 "flush": true, 00:12:12.977 "reset": true, 00:12:12.977 "nvme_admin": false, 00:12:12.977 "nvme_io": false, 00:12:12.977 "nvme_io_md": false, 00:12:12.977 "write_zeroes": true, 00:12:12.977 "zcopy": true, 00:12:12.977 "get_zone_info": false, 00:12:12.977 "zone_management": false, 00:12:12.977 "zone_append": false, 
00:12:12.977 "compare": false, 00:12:12.977 "compare_and_write": false, 00:12:12.977 "abort": true, 00:12:12.977 "seek_hole": false, 00:12:12.977 "seek_data": false, 00:12:12.977 "copy": true, 00:12:12.977 "nvme_iov_md": false 00:12:12.977 }, 00:12:12.977 "memory_domains": [ 00:12:12.977 { 00:12:12.977 "dma_device_id": "system", 00:12:12.977 "dma_device_type": 1 00:12:12.977 }, 00:12:12.977 { 00:12:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.978 "dma_device_type": 2 00:12:12.978 } 00:12:12.978 ], 00:12:12.978 "driver_specific": {} 00:12:12.978 } 00:12:12.978 ] 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.978 [2024-11-17 01:32:21.367830] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.978 [2024-11-17 01:32:21.367921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.978 [2024-11-17 01:32:21.367969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.978 [2024-11-17 01:32:21.369817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.978 [2024-11-17 01:32:21.369904] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:12.978 "name": "Existed_Raid", 00:12:12.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.978 "strip_size_kb": 0, 00:12:12.978 "state": "configuring", 00:12:12.978 "raid_level": "raid1", 00:12:12.978 "superblock": false, 00:12:12.978 "num_base_bdevs": 4, 00:12:12.978 "num_base_bdevs_discovered": 3, 00:12:12.978 "num_base_bdevs_operational": 4, 00:12:12.978 "base_bdevs_list": [ 00:12:12.978 { 00:12:12.978 "name": "BaseBdev1", 00:12:12.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.978 "is_configured": false, 00:12:12.978 "data_offset": 0, 00:12:12.978 "data_size": 0 00:12:12.978 }, 00:12:12.978 { 00:12:12.978 "name": "BaseBdev2", 00:12:12.978 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:12.978 "is_configured": true, 00:12:12.978 "data_offset": 0, 00:12:12.978 "data_size": 65536 00:12:12.978 }, 00:12:12.978 { 00:12:12.978 "name": "BaseBdev3", 00:12:12.978 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:12.978 "is_configured": true, 00:12:12.978 "data_offset": 0, 00:12:12.978 "data_size": 65536 00:12:12.978 }, 00:12:12.978 { 00:12:12.978 "name": "BaseBdev4", 00:12:12.978 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:12.978 "is_configured": true, 00:12:12.978 "data_offset": 0, 00:12:12.978 "data_size": 65536 00:12:12.978 } 00:12:12.978 ] 00:12:12.978 }' 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.978 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.547 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:13.547 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.547 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.547 [2024-11-17 01:32:21.787158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:13.547 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.547 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.548 "name": "Existed_Raid", 00:12:13.548 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:13.548 "strip_size_kb": 0, 00:12:13.548 "state": "configuring", 00:12:13.548 "raid_level": "raid1", 00:12:13.548 "superblock": false, 00:12:13.548 "num_base_bdevs": 4, 00:12:13.548 "num_base_bdevs_discovered": 2, 00:12:13.548 "num_base_bdevs_operational": 4, 00:12:13.548 "base_bdevs_list": [ 00:12:13.548 { 00:12:13.548 "name": "BaseBdev1", 00:12:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.548 "is_configured": false, 00:12:13.548 "data_offset": 0, 00:12:13.548 "data_size": 0 00:12:13.548 }, 00:12:13.548 { 00:12:13.548 "name": null, 00:12:13.548 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:13.548 "is_configured": false, 00:12:13.548 "data_offset": 0, 00:12:13.548 "data_size": 65536 00:12:13.548 }, 00:12:13.548 { 00:12:13.548 "name": "BaseBdev3", 00:12:13.548 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:13.548 "is_configured": true, 00:12:13.548 "data_offset": 0, 00:12:13.548 "data_size": 65536 00:12:13.548 }, 00:12:13.548 { 00:12:13.548 "name": "BaseBdev4", 00:12:13.548 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:13.548 "is_configured": true, 00:12:13.548 "data_offset": 0, 00:12:13.548 "data_size": 65536 00:12:13.548 } 00:12:13.548 ] 00:12:13.548 }' 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.548 01:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.807 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.807 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:13.807 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.807 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.067 [2024-11-17 01:32:22.345484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.067 BaseBdev1 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.067 [ 00:12:14.067 { 00:12:14.067 "name": "BaseBdev1", 00:12:14.067 "aliases": [ 00:12:14.067 "fa7a3b03-d050-4834-9e6f-14505caad081" 00:12:14.067 ], 00:12:14.067 "product_name": "Malloc disk", 00:12:14.067 "block_size": 512, 00:12:14.067 "num_blocks": 65536, 00:12:14.067 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:14.067 "assigned_rate_limits": { 00:12:14.067 "rw_ios_per_sec": 0, 00:12:14.067 "rw_mbytes_per_sec": 0, 00:12:14.067 "r_mbytes_per_sec": 0, 00:12:14.067 "w_mbytes_per_sec": 0 00:12:14.067 }, 00:12:14.067 "claimed": true, 00:12:14.067 "claim_type": "exclusive_write", 00:12:14.067 "zoned": false, 00:12:14.067 "supported_io_types": { 00:12:14.067 "read": true, 00:12:14.067 "write": true, 00:12:14.067 "unmap": true, 00:12:14.067 "flush": true, 00:12:14.067 "reset": true, 00:12:14.067 "nvme_admin": false, 00:12:14.067 "nvme_io": false, 00:12:14.067 "nvme_io_md": false, 00:12:14.067 "write_zeroes": true, 00:12:14.067 "zcopy": true, 00:12:14.067 "get_zone_info": false, 00:12:14.067 "zone_management": false, 00:12:14.067 "zone_append": false, 00:12:14.067 "compare": false, 00:12:14.067 "compare_and_write": false, 00:12:14.067 "abort": true, 00:12:14.067 "seek_hole": false, 00:12:14.067 "seek_data": false, 00:12:14.067 "copy": true, 00:12:14.067 "nvme_iov_md": false 00:12:14.067 }, 00:12:14.067 "memory_domains": [ 00:12:14.067 { 00:12:14.067 "dma_device_id": "system", 00:12:14.067 "dma_device_type": 1 00:12:14.067 }, 00:12:14.067 { 00:12:14.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.067 "dma_device_type": 2 00:12:14.067 } 00:12:14.067 ], 00:12:14.067 "driver_specific": {} 00:12:14.067 } 00:12:14.067 ] 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.067 "name": "Existed_Raid", 00:12:14.067 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:14.067 "strip_size_kb": 0, 00:12:14.067 "state": "configuring", 00:12:14.067 "raid_level": "raid1", 00:12:14.067 "superblock": false, 00:12:14.067 "num_base_bdevs": 4, 00:12:14.067 "num_base_bdevs_discovered": 3, 00:12:14.067 "num_base_bdevs_operational": 4, 00:12:14.067 "base_bdevs_list": [ 00:12:14.067 { 00:12:14.067 "name": "BaseBdev1", 00:12:14.067 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:14.067 "is_configured": true, 00:12:14.067 "data_offset": 0, 00:12:14.067 "data_size": 65536 00:12:14.067 }, 00:12:14.067 { 00:12:14.067 "name": null, 00:12:14.067 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:14.067 "is_configured": false, 00:12:14.067 "data_offset": 0, 00:12:14.067 "data_size": 65536 00:12:14.067 }, 00:12:14.067 { 00:12:14.067 "name": "BaseBdev3", 00:12:14.067 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:14.067 "is_configured": true, 00:12:14.067 "data_offset": 0, 00:12:14.067 "data_size": 65536 00:12:14.067 }, 00:12:14.067 { 00:12:14.067 "name": "BaseBdev4", 00:12:14.067 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:14.067 "is_configured": true, 00:12:14.067 "data_offset": 0, 00:12:14.067 "data_size": 65536 00:12:14.067 } 00:12:14.067 ] 00:12:14.067 }' 00:12:14.067 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.068 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.637 [2024-11-17 01:32:22.872672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.637 "name": "Existed_Raid", 00:12:14.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.637 "strip_size_kb": 0, 00:12:14.637 "state": "configuring", 00:12:14.637 "raid_level": "raid1", 00:12:14.637 "superblock": false, 00:12:14.637 "num_base_bdevs": 4, 00:12:14.637 "num_base_bdevs_discovered": 2, 00:12:14.637 "num_base_bdevs_operational": 4, 00:12:14.637 "base_bdevs_list": [ 00:12:14.637 { 00:12:14.637 "name": "BaseBdev1", 00:12:14.637 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:14.637 "is_configured": true, 00:12:14.637 "data_offset": 0, 00:12:14.637 "data_size": 65536 00:12:14.637 }, 00:12:14.637 { 00:12:14.637 "name": null, 00:12:14.637 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:14.637 "is_configured": false, 00:12:14.637 "data_offset": 0, 00:12:14.637 "data_size": 65536 00:12:14.637 }, 00:12:14.637 { 00:12:14.637 "name": null, 00:12:14.637 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:14.637 "is_configured": false, 00:12:14.637 "data_offset": 0, 00:12:14.637 "data_size": 65536 00:12:14.637 }, 00:12:14.637 { 00:12:14.637 "name": "BaseBdev4", 00:12:14.637 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:14.637 "is_configured": true, 00:12:14.637 "data_offset": 0, 00:12:14.637 "data_size": 65536 00:12:14.637 } 00:12:14.637 ] 00:12:14.637 }' 00:12:14.637 01:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.637 01:32:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.896 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.896 [2024-11-17 01:32:23.351852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.155 01:32:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.155 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.156 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.156 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.156 "name": "Existed_Raid", 00:12:15.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.156 "strip_size_kb": 0, 00:12:15.156 "state": "configuring", 00:12:15.156 "raid_level": "raid1", 00:12:15.156 "superblock": false, 00:12:15.156 "num_base_bdevs": 4, 00:12:15.156 "num_base_bdevs_discovered": 3, 00:12:15.156 "num_base_bdevs_operational": 4, 00:12:15.156 "base_bdevs_list": [ 00:12:15.156 { 00:12:15.156 "name": "BaseBdev1", 00:12:15.156 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:15.156 "is_configured": true, 00:12:15.156 "data_offset": 0, 00:12:15.156 "data_size": 65536 00:12:15.156 }, 00:12:15.156 { 00:12:15.156 "name": null, 00:12:15.156 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:15.156 "is_configured": false, 00:12:15.156 "data_offset": 
0, 00:12:15.156 "data_size": 65536 00:12:15.156 }, 00:12:15.156 { 00:12:15.156 "name": "BaseBdev3", 00:12:15.156 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:15.156 "is_configured": true, 00:12:15.156 "data_offset": 0, 00:12:15.156 "data_size": 65536 00:12:15.156 }, 00:12:15.156 { 00:12:15.156 "name": "BaseBdev4", 00:12:15.156 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:15.156 "is_configured": true, 00:12:15.156 "data_offset": 0, 00:12:15.156 "data_size": 65536 00:12:15.156 } 00:12:15.156 ] 00:12:15.156 }' 00:12:15.156 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.156 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.415 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.415 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:15.415 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.415 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.415 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 [2024-11-17 01:32:23.882982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.674 01:32:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.674 01:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.674 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.674 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.674 "name": "Existed_Raid", 00:12:15.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.674 "strip_size_kb": 0, 00:12:15.674 "state": "configuring", 00:12:15.674 
"raid_level": "raid1", 00:12:15.674 "superblock": false, 00:12:15.674 "num_base_bdevs": 4, 00:12:15.674 "num_base_bdevs_discovered": 2, 00:12:15.674 "num_base_bdevs_operational": 4, 00:12:15.674 "base_bdevs_list": [ 00:12:15.674 { 00:12:15.674 "name": null, 00:12:15.674 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:15.674 "is_configured": false, 00:12:15.674 "data_offset": 0, 00:12:15.674 "data_size": 65536 00:12:15.674 }, 00:12:15.675 { 00:12:15.675 "name": null, 00:12:15.675 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:15.675 "is_configured": false, 00:12:15.675 "data_offset": 0, 00:12:15.675 "data_size": 65536 00:12:15.675 }, 00:12:15.675 { 00:12:15.675 "name": "BaseBdev3", 00:12:15.675 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:15.675 "is_configured": true, 00:12:15.675 "data_offset": 0, 00:12:15.675 "data_size": 65536 00:12:15.675 }, 00:12:15.675 { 00:12:15.675 "name": "BaseBdev4", 00:12:15.675 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:15.675 "is_configured": true, 00:12:15.675 "data_offset": 0, 00:12:15.675 "data_size": 65536 00:12:15.675 } 00:12:15.675 ] 00:12:15.675 }' 00:12:15.675 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.675 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.244 [2024-11-17 01:32:24.470854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.244 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.245 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:16.245 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.245 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.245 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.245 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.245 "name": "Existed_Raid", 00:12:16.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.245 "strip_size_kb": 0, 00:12:16.245 "state": "configuring", 00:12:16.245 "raid_level": "raid1", 00:12:16.245 "superblock": false, 00:12:16.245 "num_base_bdevs": 4, 00:12:16.245 "num_base_bdevs_discovered": 3, 00:12:16.245 "num_base_bdevs_operational": 4, 00:12:16.245 "base_bdevs_list": [ 00:12:16.245 { 00:12:16.245 "name": null, 00:12:16.245 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:16.245 "is_configured": false, 00:12:16.245 "data_offset": 0, 00:12:16.245 "data_size": 65536 00:12:16.245 }, 00:12:16.245 { 00:12:16.245 "name": "BaseBdev2", 00:12:16.245 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:16.245 "is_configured": true, 00:12:16.245 "data_offset": 0, 00:12:16.245 "data_size": 65536 00:12:16.245 }, 00:12:16.245 { 00:12:16.245 "name": "BaseBdev3", 00:12:16.245 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:16.245 "is_configured": true, 00:12:16.245 "data_offset": 0, 00:12:16.245 "data_size": 65536 00:12:16.245 }, 00:12:16.245 { 00:12:16.245 "name": "BaseBdev4", 00:12:16.245 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:16.245 "is_configured": true, 00:12:16.245 "data_offset": 0, 00:12:16.245 "data_size": 65536 00:12:16.245 } 00:12:16.245 ] 00:12:16.245 }' 00:12:16.245 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.245 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.504 01:32:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fa7a3b03-d050-4834-9e6f-14505caad081 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.504 01:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.764 [2024-11-17 01:32:24.998823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:16.764 [2024-11-17 01:32:24.998915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:16.764 [2024-11-17 01:32:24.998942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:16.764 
[2024-11-17 01:32:24.999237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:16.764 [2024-11-17 01:32:24.999438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:16.764 [2024-11-17 01:32:24.999497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:16.764 [2024-11-17 01:32:24.999805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.764 NewBaseBdev 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.764 [ 00:12:16.764 { 00:12:16.764 "name": "NewBaseBdev", 00:12:16.764 "aliases": [ 00:12:16.764 "fa7a3b03-d050-4834-9e6f-14505caad081" 00:12:16.764 ], 00:12:16.764 "product_name": "Malloc disk", 00:12:16.764 "block_size": 512, 00:12:16.764 "num_blocks": 65536, 00:12:16.764 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:16.764 "assigned_rate_limits": { 00:12:16.764 "rw_ios_per_sec": 0, 00:12:16.764 "rw_mbytes_per_sec": 0, 00:12:16.764 "r_mbytes_per_sec": 0, 00:12:16.764 "w_mbytes_per_sec": 0 00:12:16.764 }, 00:12:16.764 "claimed": true, 00:12:16.764 "claim_type": "exclusive_write", 00:12:16.764 "zoned": false, 00:12:16.764 "supported_io_types": { 00:12:16.764 "read": true, 00:12:16.764 "write": true, 00:12:16.764 "unmap": true, 00:12:16.764 "flush": true, 00:12:16.764 "reset": true, 00:12:16.764 "nvme_admin": false, 00:12:16.764 "nvme_io": false, 00:12:16.764 "nvme_io_md": false, 00:12:16.764 "write_zeroes": true, 00:12:16.764 "zcopy": true, 00:12:16.764 "get_zone_info": false, 00:12:16.764 "zone_management": false, 00:12:16.764 "zone_append": false, 00:12:16.764 "compare": false, 00:12:16.764 "compare_and_write": false, 00:12:16.764 "abort": true, 00:12:16.764 "seek_hole": false, 00:12:16.764 "seek_data": false, 00:12:16.764 "copy": true, 00:12:16.764 "nvme_iov_md": false 00:12:16.764 }, 00:12:16.764 "memory_domains": [ 00:12:16.764 { 00:12:16.764 "dma_device_id": "system", 00:12:16.764 "dma_device_type": 1 00:12:16.764 }, 00:12:16.764 { 00:12:16.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.764 "dma_device_type": 2 00:12:16.764 } 00:12:16.764 ], 00:12:16.764 "driver_specific": {} 00:12:16.764 } 00:12:16.764 ] 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
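The trace above recreates the deleted base bdev with `bdev_malloc_create 32 512 -b NewBaseBdev -u fa7a3b03-…` (reusing the original UUID so the raid can reclaim it) and then blocks on `waitforbdev NewBaseBdev`, which polls `bdev_get_bdevs` until the device registers or a timeout expires. A minimal standalone sketch of that retry loop follows; `probe_bdev` is a stand-in for the real `rpc.py bdev_get_bdevs -b <name>` call (here it just checks a marker file so the loop can run anywhere), and the helper name is an assumption modeled on `common/autotest_common.sh`, not the exact upstream implementation:

```shell
#!/bin/sh
# Sketch of the waitforbdev polling pattern. probe_bdev is a placeholder
# probe standing in for `rpc.py bdev_get_bdevs -b <name>`; it "finds" the
# bdev once a marker file exists, purely to make the loop self-contained.
probe_bdev() {
    [ -f "/tmp/fake_bdev_$1" ]
}

waitforbdev_sketch() {
    name=$1
    timeout_ms=${2:-2000}   # the real helper defaults to 2000 ms as well
    i=0
    while [ "$i" -lt $((timeout_ms / 100)) ]; do
        if probe_bdev "$name"; then
            return 0        # bdev registered; caller may proceed
        fi
        sleep 1             # real helper sleeps briefly between polls
        i=$((i + 1))
    done
    return 1                # timed out waiting for the bdev
}

touch "/tmp/fake_bdev_NewBaseBdev"
waitforbdev_sketch NewBaseBdev && echo "bdev NewBaseBdev is ready"
rm -f "/tmp/fake_bdev_NewBaseBdev"
```

Once the poll succeeds, the test re-runs `verify_raid_bdev_state` expecting the array to transition to `online` with all four base bdevs discovered, as the next trace lines show.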
00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.764 "name": "Existed_Raid", 00:12:16.764 "uuid": "95e72df6-ed62-46f0-9246-d4ee161a6d07", 00:12:16.764 "strip_size_kb": 0, 00:12:16.764 "state": "online", 00:12:16.764 
"raid_level": "raid1", 00:12:16.764 "superblock": false, 00:12:16.764 "num_base_bdevs": 4, 00:12:16.764 "num_base_bdevs_discovered": 4, 00:12:16.764 "num_base_bdevs_operational": 4, 00:12:16.764 "base_bdevs_list": [ 00:12:16.764 { 00:12:16.764 "name": "NewBaseBdev", 00:12:16.764 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:16.764 "is_configured": true, 00:12:16.764 "data_offset": 0, 00:12:16.764 "data_size": 65536 00:12:16.764 }, 00:12:16.764 { 00:12:16.764 "name": "BaseBdev2", 00:12:16.764 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:16.764 "is_configured": true, 00:12:16.764 "data_offset": 0, 00:12:16.764 "data_size": 65536 00:12:16.764 }, 00:12:16.764 { 00:12:16.764 "name": "BaseBdev3", 00:12:16.764 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:16.764 "is_configured": true, 00:12:16.764 "data_offset": 0, 00:12:16.764 "data_size": 65536 00:12:16.764 }, 00:12:16.764 { 00:12:16.764 "name": "BaseBdev4", 00:12:16.764 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:16.764 "is_configured": true, 00:12:16.764 "data_offset": 0, 00:12:16.764 "data_size": 65536 00:12:16.764 } 00:12:16.764 ] 00:12:16.764 }' 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.764 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.023 [2024-11-17 01:32:25.446450] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.023 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.023 "name": "Existed_Raid", 00:12:17.023 "aliases": [ 00:12:17.023 "95e72df6-ed62-46f0-9246-d4ee161a6d07" 00:12:17.023 ], 00:12:17.023 "product_name": "Raid Volume", 00:12:17.023 "block_size": 512, 00:12:17.023 "num_blocks": 65536, 00:12:17.023 "uuid": "95e72df6-ed62-46f0-9246-d4ee161a6d07", 00:12:17.023 "assigned_rate_limits": { 00:12:17.023 "rw_ios_per_sec": 0, 00:12:17.023 "rw_mbytes_per_sec": 0, 00:12:17.023 "r_mbytes_per_sec": 0, 00:12:17.023 "w_mbytes_per_sec": 0 00:12:17.023 }, 00:12:17.023 "claimed": false, 00:12:17.023 "zoned": false, 00:12:17.023 "supported_io_types": { 00:12:17.023 "read": true, 00:12:17.023 "write": true, 00:12:17.023 "unmap": false, 00:12:17.023 "flush": false, 00:12:17.023 "reset": true, 00:12:17.023 "nvme_admin": false, 00:12:17.024 "nvme_io": false, 00:12:17.024 "nvme_io_md": false, 00:12:17.024 "write_zeroes": true, 00:12:17.024 "zcopy": false, 00:12:17.024 "get_zone_info": false, 00:12:17.024 "zone_management": false, 00:12:17.024 "zone_append": false, 00:12:17.024 "compare": false, 00:12:17.024 "compare_and_write": false, 00:12:17.024 "abort": false, 00:12:17.024 "seek_hole": false, 00:12:17.024 "seek_data": false, 00:12:17.024 
"copy": false, 00:12:17.024 "nvme_iov_md": false 00:12:17.024 }, 00:12:17.024 "memory_domains": [ 00:12:17.024 { 00:12:17.024 "dma_device_id": "system", 00:12:17.024 "dma_device_type": 1 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.024 "dma_device_type": 2 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "dma_device_id": "system", 00:12:17.024 "dma_device_type": 1 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.024 "dma_device_type": 2 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "dma_device_id": "system", 00:12:17.024 "dma_device_type": 1 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.024 "dma_device_type": 2 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "dma_device_id": "system", 00:12:17.024 "dma_device_type": 1 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.024 "dma_device_type": 2 00:12:17.024 } 00:12:17.024 ], 00:12:17.024 "driver_specific": { 00:12:17.024 "raid": { 00:12:17.024 "uuid": "95e72df6-ed62-46f0-9246-d4ee161a6d07", 00:12:17.024 "strip_size_kb": 0, 00:12:17.024 "state": "online", 00:12:17.024 "raid_level": "raid1", 00:12:17.024 "superblock": false, 00:12:17.024 "num_base_bdevs": 4, 00:12:17.024 "num_base_bdevs_discovered": 4, 00:12:17.024 "num_base_bdevs_operational": 4, 00:12:17.024 "base_bdevs_list": [ 00:12:17.024 { 00:12:17.024 "name": "NewBaseBdev", 00:12:17.024 "uuid": "fa7a3b03-d050-4834-9e6f-14505caad081", 00:12:17.024 "is_configured": true, 00:12:17.024 "data_offset": 0, 00:12:17.024 "data_size": 65536 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "name": "BaseBdev2", 00:12:17.024 "uuid": "fd9c865c-871c-42ec-aedb-d41b6d16c995", 00:12:17.024 "is_configured": true, 00:12:17.024 "data_offset": 0, 00:12:17.024 "data_size": 65536 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "name": "BaseBdev3", 00:12:17.024 "uuid": "51a7a8da-f084-4012-93ee-b71ee43c7fc1", 00:12:17.024 
"is_configured": true, 00:12:17.024 "data_offset": 0, 00:12:17.024 "data_size": 65536 00:12:17.024 }, 00:12:17.024 { 00:12:17.024 "name": "BaseBdev4", 00:12:17.024 "uuid": "a50a5fd3-d7af-4d47-bfae-3db1c3c2b343", 00:12:17.024 "is_configured": true, 00:12:17.024 "data_offset": 0, 00:12:17.024 "data_size": 65536 00:12:17.024 } 00:12:17.024 ] 00:12:17.024 } 00:12:17.024 } 00:12:17.024 }' 00:12:17.024 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.283 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:17.284 BaseBdev2 00:12:17.284 BaseBdev3 00:12:17.284 BaseBdev4' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.284 01:32:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.284 01:32:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.284 [2024-11-17 01:32:25.729649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.284 [2024-11-17 01:32:25.729696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.284 [2024-11-17 01:32:25.729815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.284 [2024-11-17 01:32:25.730143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.284 [2024-11-17 01:32:25.730166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72949 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72949 ']' 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72949 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:17.284 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.544 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72949 00:12:17.544 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.544 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.544 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72949' 00:12:17.544 killing process with pid 72949 00:12:17.544 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72949 00:12:17.544 [2024-11-17 01:32:25.771173] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.544 01:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72949 00:12:17.803 [2024-11-17 01:32:26.193613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.182 ************************************ 00:12:19.182 END TEST raid_state_function_test 00:12:19.182 ************************************ 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:19.182 00:12:19.182 real 0m11.584s 00:12:19.182 user 0m18.329s 00:12:19.182 sys 0m2.034s 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
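The teardown above runs `killprocess 72949`: it first confirms the pid is alive and that its process name is not `sudo` before signalling it, then waits for the raid cleanup debug messages. A condensed sketch of that pattern is below; the helper name and structure are assumptions modeled on the `killprocess` helper in `common/autotest_common.sh`, simplified to omit the uname/process-name checks seen in the trace:

```shell
#!/bin/sh
# Sketch of the killprocess teardown pattern: verify the pid exists with
# `kill -0` (which sends no signal, only checks liveness/permission),
# then terminate it and reap it with `wait`.
killprocess_sketch() {
    pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid not running"
        return 1
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null   # reap the child; ignore the SIGTERM status
    echo "killed process $pid"
}

sleep 30 &
killprocess_sketch $!
```

Note that `kill -0` succeeding only proves the process exists and is signalable; the real helper additionally compares the process name (`ps --no-headers -o comm=`) so it never signals a `sudo` wrapper by mistake, which is exactly the `'[' reactor_0 = sudo ']'` check visible in the log.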
00:12:19.182 01:32:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:19.182 01:32:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:19.182 01:32:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.182 01:32:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.182 ************************************ 00:12:19.182 START TEST raid_state_function_test_sb 00:12:19.182 ************************************ 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.182 
01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73620 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73620' 00:12:19.182 Process raid pid: 73620 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73620 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73620 ']' 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.182 01:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.183 [2024-11-17 01:32:27.564457] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:19.183 [2024-11-17 01:32:27.564663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.442 [2024-11-17 01:32:27.737929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.442 [2024-11-17 01:32:27.883432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.709 [2024-11-17 01:32:28.131936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.709 [2024-11-17 01:32:28.132100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.977 [2024-11-17 01:32:28.412396] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.977 [2024-11-17 01:32:28.412540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:19.977 [2024-11-17 01:32:28.412571] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.977 [2024-11-17 01:32:28.412595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.977 [2024-11-17 01:32:28.412614] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:19.977 [2024-11-17 01:32:28.412635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.977 [2024-11-17 01:32:28.412660] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:19.977 [2024-11-17 01:32:28.412701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.977 01:32:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.977 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.236 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.236 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.236 "name": "Existed_Raid", 00:12:20.236 "uuid": "e60c3b68-ceec-487c-b2bf-323a8556c0e7", 00:12:20.236 "strip_size_kb": 0, 00:12:20.236 "state": "configuring", 00:12:20.236 "raid_level": "raid1", 00:12:20.236 "superblock": true, 00:12:20.236 "num_base_bdevs": 4, 00:12:20.236 "num_base_bdevs_discovered": 0, 00:12:20.236 "num_base_bdevs_operational": 4, 00:12:20.236 "base_bdevs_list": [ 00:12:20.236 { 00:12:20.237 "name": "BaseBdev1", 00:12:20.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.237 "is_configured": false, 00:12:20.237 "data_offset": 0, 00:12:20.237 "data_size": 0 00:12:20.237 }, 00:12:20.237 { 00:12:20.237 "name": "BaseBdev2", 00:12:20.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.237 "is_configured": false, 00:12:20.237 "data_offset": 0, 00:12:20.237 "data_size": 0 00:12:20.237 }, 00:12:20.237 { 00:12:20.237 "name": "BaseBdev3", 00:12:20.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.237 "is_configured": false, 00:12:20.237 "data_offset": 0, 00:12:20.237 "data_size": 0 00:12:20.237 }, 00:12:20.237 { 00:12:20.237 "name": "BaseBdev4", 00:12:20.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.237 "is_configured": false, 00:12:20.237 "data_offset": 0, 00:12:20.237 "data_size": 0 00:12:20.237 } 00:12:20.237 ] 00:12:20.237 }' 00:12:20.237 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.237 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.496 01:32:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.496 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.496 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.496 [2024-11-17 01:32:28.915490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.496 [2024-11-17 01:32:28.915546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:20.496 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.496 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:20.496 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.497 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.497 [2024-11-17 01:32:28.927451] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.497 [2024-11-17 01:32:28.927500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.497 [2024-11-17 01:32:28.927510] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.497 [2024-11-17 01:32:28.927521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.497 [2024-11-17 01:32:28.927527] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:20.497 [2024-11-17 01:32:28.927537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:20.497 [2024-11-17 01:32:28.927543] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:12:20.497 [2024-11-17 01:32:28.927552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:20.497 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.497 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:20.497 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.497 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.756 [2024-11-17 01:32:28.985193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.756 BaseBdev1 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.756 01:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.756 [ 00:12:20.756 { 00:12:20.756 "name": "BaseBdev1", 00:12:20.756 "aliases": [ 00:12:20.756 "3adfd5ce-7b6c-43b9-b0a1-2f7af0b5af1c" 00:12:20.756 ], 00:12:20.756 "product_name": "Malloc disk", 00:12:20.756 "block_size": 512, 00:12:20.756 "num_blocks": 65536, 00:12:20.756 "uuid": "3adfd5ce-7b6c-43b9-b0a1-2f7af0b5af1c", 00:12:20.756 "assigned_rate_limits": { 00:12:20.756 "rw_ios_per_sec": 0, 00:12:20.756 "rw_mbytes_per_sec": 0, 00:12:20.756 "r_mbytes_per_sec": 0, 00:12:20.756 "w_mbytes_per_sec": 0 00:12:20.756 }, 00:12:20.756 "claimed": true, 00:12:20.756 "claim_type": "exclusive_write", 00:12:20.756 "zoned": false, 00:12:20.756 "supported_io_types": { 00:12:20.756 "read": true, 00:12:20.756 "write": true, 00:12:20.756 "unmap": true, 00:12:20.756 "flush": true, 00:12:20.756 "reset": true, 00:12:20.756 "nvme_admin": false, 00:12:20.756 "nvme_io": false, 00:12:20.756 "nvme_io_md": false, 00:12:20.756 "write_zeroes": true, 00:12:20.756 "zcopy": true, 00:12:20.756 "get_zone_info": false, 00:12:20.756 "zone_management": false, 00:12:20.756 "zone_append": false, 00:12:20.756 "compare": false, 00:12:20.756 "compare_and_write": false, 00:12:20.756 "abort": true, 00:12:20.756 "seek_hole": false, 00:12:20.756 "seek_data": false, 00:12:20.756 "copy": true, 00:12:20.756 "nvme_iov_md": false 00:12:20.756 }, 00:12:20.756 "memory_domains": [ 00:12:20.756 { 00:12:20.756 "dma_device_id": "system", 00:12:20.756 "dma_device_type": 1 00:12:20.756 }, 00:12:20.756 { 00:12:20.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.756 "dma_device_type": 2 00:12:20.756 } 00:12:20.756 
], 00:12:20.756 "driver_specific": {} 00:12:20.756 } 00:12:20.756 ] 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.756 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.757 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.757 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.757 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.757 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.757 01:32:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.757 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.757 "name": "Existed_Raid", 00:12:20.757 "uuid": "3045622c-8959-4aa1-9df7-1467c78da85c", 00:12:20.757 "strip_size_kb": 0, 00:12:20.757 "state": "configuring", 00:12:20.757 "raid_level": "raid1", 00:12:20.757 "superblock": true, 00:12:20.757 "num_base_bdevs": 4, 00:12:20.757 "num_base_bdevs_discovered": 1, 00:12:20.757 "num_base_bdevs_operational": 4, 00:12:20.757 "base_bdevs_list": [ 00:12:20.757 { 00:12:20.757 "name": "BaseBdev1", 00:12:20.757 "uuid": "3adfd5ce-7b6c-43b9-b0a1-2f7af0b5af1c", 00:12:20.757 "is_configured": true, 00:12:20.757 "data_offset": 2048, 00:12:20.757 "data_size": 63488 00:12:20.757 }, 00:12:20.757 { 00:12:20.757 "name": "BaseBdev2", 00:12:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.757 "is_configured": false, 00:12:20.757 "data_offset": 0, 00:12:20.757 "data_size": 0 00:12:20.757 }, 00:12:20.757 { 00:12:20.757 "name": "BaseBdev3", 00:12:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.757 "is_configured": false, 00:12:20.757 "data_offset": 0, 00:12:20.757 "data_size": 0 00:12:20.757 }, 00:12:20.757 { 00:12:20.757 "name": "BaseBdev4", 00:12:20.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.757 "is_configured": false, 00:12:20.757 "data_offset": 0, 00:12:20.757 "data_size": 0 00:12:20.757 } 00:12:20.757 ] 00:12:20.757 }' 00:12:20.757 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.757 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.017 01:32:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.017 [2024-11-17 01:32:29.444481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:21.017 [2024-11-17 01:32:29.444549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.017 [2024-11-17 01:32:29.456500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.017 [2024-11-17 01:32:29.458502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.017 [2024-11-17 01:32:29.458618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.017 [2024-11-17 01:32:29.458633] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:21.017 [2024-11-17 01:32:29.458645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:21.017 [2024-11-17 01:32:29.458651] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:21.017 [2024-11-17 01:32:29.458660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.017 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.277 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.277 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:21.277 "name": "Existed_Raid", 00:12:21.277 "uuid": "c8b4a0ac-955a-432c-9019-1f526ecb6f51", 00:12:21.277 "strip_size_kb": 0, 00:12:21.277 "state": "configuring", 00:12:21.277 "raid_level": "raid1", 00:12:21.277 "superblock": true, 00:12:21.277 "num_base_bdevs": 4, 00:12:21.277 "num_base_bdevs_discovered": 1, 00:12:21.277 "num_base_bdevs_operational": 4, 00:12:21.277 "base_bdevs_list": [ 00:12:21.277 { 00:12:21.277 "name": "BaseBdev1", 00:12:21.277 "uuid": "3adfd5ce-7b6c-43b9-b0a1-2f7af0b5af1c", 00:12:21.277 "is_configured": true, 00:12:21.277 "data_offset": 2048, 00:12:21.277 "data_size": 63488 00:12:21.277 }, 00:12:21.277 { 00:12:21.277 "name": "BaseBdev2", 00:12:21.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.277 "is_configured": false, 00:12:21.277 "data_offset": 0, 00:12:21.277 "data_size": 0 00:12:21.277 }, 00:12:21.277 { 00:12:21.277 "name": "BaseBdev3", 00:12:21.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.277 "is_configured": false, 00:12:21.277 "data_offset": 0, 00:12:21.277 "data_size": 0 00:12:21.277 }, 00:12:21.277 { 00:12:21.277 "name": "BaseBdev4", 00:12:21.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.277 "is_configured": false, 00:12:21.277 "data_offset": 0, 00:12:21.277 "data_size": 0 00:12:21.277 } 00:12:21.277 ] 00:12:21.277 }' 00:12:21.277 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.277 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.537 [2024-11-17 01:32:29.920801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:12:21.537 BaseBdev2 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.537 [ 00:12:21.537 { 00:12:21.537 "name": "BaseBdev2", 00:12:21.537 "aliases": [ 00:12:21.537 "b4018b9a-67ef-4b10-a792-7c88ec4aeda6" 00:12:21.537 ], 00:12:21.537 "product_name": "Malloc disk", 00:12:21.537 "block_size": 512, 00:12:21.537 "num_blocks": 65536, 00:12:21.537 "uuid": "b4018b9a-67ef-4b10-a792-7c88ec4aeda6", 00:12:21.537 
"assigned_rate_limits": { 00:12:21.537 "rw_ios_per_sec": 0, 00:12:21.537 "rw_mbytes_per_sec": 0, 00:12:21.537 "r_mbytes_per_sec": 0, 00:12:21.537 "w_mbytes_per_sec": 0 00:12:21.537 }, 00:12:21.537 "claimed": true, 00:12:21.537 "claim_type": "exclusive_write", 00:12:21.537 "zoned": false, 00:12:21.537 "supported_io_types": { 00:12:21.537 "read": true, 00:12:21.537 "write": true, 00:12:21.537 "unmap": true, 00:12:21.537 "flush": true, 00:12:21.537 "reset": true, 00:12:21.537 "nvme_admin": false, 00:12:21.537 "nvme_io": false, 00:12:21.537 "nvme_io_md": false, 00:12:21.537 "write_zeroes": true, 00:12:21.537 "zcopy": true, 00:12:21.537 "get_zone_info": false, 00:12:21.537 "zone_management": false, 00:12:21.537 "zone_append": false, 00:12:21.537 "compare": false, 00:12:21.537 "compare_and_write": false, 00:12:21.537 "abort": true, 00:12:21.537 "seek_hole": false, 00:12:21.537 "seek_data": false, 00:12:21.537 "copy": true, 00:12:21.537 "nvme_iov_md": false 00:12:21.537 }, 00:12:21.537 "memory_domains": [ 00:12:21.537 { 00:12:21.537 "dma_device_id": "system", 00:12:21.537 "dma_device_type": 1 00:12:21.537 }, 00:12:21.537 { 00:12:21.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.537 "dma_device_type": 2 00:12:21.537 } 00:12:21.537 ], 00:12:21.537 "driver_specific": {} 00:12:21.537 } 00:12:21.537 ] 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.537 01:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.798 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.798 "name": "Existed_Raid", 00:12:21.798 "uuid": "c8b4a0ac-955a-432c-9019-1f526ecb6f51", 00:12:21.798 "strip_size_kb": 0, 00:12:21.798 "state": "configuring", 00:12:21.798 "raid_level": "raid1", 00:12:21.798 "superblock": true, 00:12:21.798 "num_base_bdevs": 4, 00:12:21.798 "num_base_bdevs_discovered": 2, 00:12:21.798 "num_base_bdevs_operational": 4, 
00:12:21.798 "base_bdevs_list": [ 00:12:21.798 { 00:12:21.798 "name": "BaseBdev1", 00:12:21.798 "uuid": "3adfd5ce-7b6c-43b9-b0a1-2f7af0b5af1c", 00:12:21.798 "is_configured": true, 00:12:21.798 "data_offset": 2048, 00:12:21.798 "data_size": 63488 00:12:21.798 }, 00:12:21.798 { 00:12:21.798 "name": "BaseBdev2", 00:12:21.798 "uuid": "b4018b9a-67ef-4b10-a792-7c88ec4aeda6", 00:12:21.798 "is_configured": true, 00:12:21.798 "data_offset": 2048, 00:12:21.798 "data_size": 63488 00:12:21.798 }, 00:12:21.798 { 00:12:21.798 "name": "BaseBdev3", 00:12:21.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.798 "is_configured": false, 00:12:21.798 "data_offset": 0, 00:12:21.798 "data_size": 0 00:12:21.798 }, 00:12:21.798 { 00:12:21.798 "name": "BaseBdev4", 00:12:21.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.798 "is_configured": false, 00:12:21.798 "data_offset": 0, 00:12:21.798 "data_size": 0 00:12:21.798 } 00:12:21.798 ] 00:12:21.798 }' 00:12:21.798 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.798 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.057 [2024-11-17 01:32:30.438234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.057 BaseBdev3 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.057 [ 00:12:22.057 { 00:12:22.057 "name": "BaseBdev3", 00:12:22.057 "aliases": [ 00:12:22.057 "5809dbcf-bd87-40d9-a9c5-d6c9f6980a1b" 00:12:22.057 ], 00:12:22.057 "product_name": "Malloc disk", 00:12:22.057 "block_size": 512, 00:12:22.057 "num_blocks": 65536, 00:12:22.057 "uuid": "5809dbcf-bd87-40d9-a9c5-d6c9f6980a1b", 00:12:22.057 "assigned_rate_limits": { 00:12:22.057 "rw_ios_per_sec": 0, 00:12:22.057 "rw_mbytes_per_sec": 0, 00:12:22.057 "r_mbytes_per_sec": 0, 00:12:22.057 "w_mbytes_per_sec": 0 00:12:22.057 }, 00:12:22.057 "claimed": true, 00:12:22.057 "claim_type": "exclusive_write", 00:12:22.057 "zoned": false, 00:12:22.057 "supported_io_types": { 00:12:22.057 "read": true, 00:12:22.057 
"write": true, 00:12:22.057 "unmap": true, 00:12:22.057 "flush": true, 00:12:22.057 "reset": true, 00:12:22.057 "nvme_admin": false, 00:12:22.057 "nvme_io": false, 00:12:22.057 "nvme_io_md": false, 00:12:22.057 "write_zeroes": true, 00:12:22.057 "zcopy": true, 00:12:22.057 "get_zone_info": false, 00:12:22.057 "zone_management": false, 00:12:22.057 "zone_append": false, 00:12:22.057 "compare": false, 00:12:22.057 "compare_and_write": false, 00:12:22.057 "abort": true, 00:12:22.057 "seek_hole": false, 00:12:22.057 "seek_data": false, 00:12:22.057 "copy": true, 00:12:22.057 "nvme_iov_md": false 00:12:22.057 }, 00:12:22.057 "memory_domains": [ 00:12:22.057 { 00:12:22.057 "dma_device_id": "system", 00:12:22.057 "dma_device_type": 1 00:12:22.057 }, 00:12:22.057 { 00:12:22.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.057 "dma_device_type": 2 00:12:22.057 } 00:12:22.057 ], 00:12:22.057 "driver_specific": {} 00:12:22.057 } 00:12:22.057 ] 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.057 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.058 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.317 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.317 "name": "Existed_Raid", 00:12:22.317 "uuid": "c8b4a0ac-955a-432c-9019-1f526ecb6f51", 00:12:22.317 "strip_size_kb": 0, 00:12:22.317 "state": "configuring", 00:12:22.317 "raid_level": "raid1", 00:12:22.317 "superblock": true, 00:12:22.317 "num_base_bdevs": 4, 00:12:22.317 "num_base_bdevs_discovered": 3, 00:12:22.317 "num_base_bdevs_operational": 4, 00:12:22.317 "base_bdevs_list": [ 00:12:22.317 { 00:12:22.317 "name": "BaseBdev1", 00:12:22.317 "uuid": "3adfd5ce-7b6c-43b9-b0a1-2f7af0b5af1c", 00:12:22.317 "is_configured": true, 00:12:22.317 "data_offset": 2048, 00:12:22.317 "data_size": 63488 00:12:22.317 }, 00:12:22.317 { 00:12:22.317 "name": "BaseBdev2", 00:12:22.317 "uuid": 
"b4018b9a-67ef-4b10-a792-7c88ec4aeda6", 00:12:22.317 "is_configured": true, 00:12:22.317 "data_offset": 2048, 00:12:22.317 "data_size": 63488 00:12:22.317 }, 00:12:22.317 { 00:12:22.317 "name": "BaseBdev3", 00:12:22.317 "uuid": "5809dbcf-bd87-40d9-a9c5-d6c9f6980a1b", 00:12:22.317 "is_configured": true, 00:12:22.317 "data_offset": 2048, 00:12:22.317 "data_size": 63488 00:12:22.317 }, 00:12:22.317 { 00:12:22.317 "name": "BaseBdev4", 00:12:22.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.317 "is_configured": false, 00:12:22.317 "data_offset": 0, 00:12:22.317 "data_size": 0 00:12:22.317 } 00:12:22.317 ] 00:12:22.317 }' 00:12:22.317 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.317 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.576 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:22.576 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.576 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.576 [2024-11-17 01:32:30.962388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.576 [2024-11-17 01:32:30.962693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.576 [2024-11-17 01:32:30.962712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.576 [2024-11-17 01:32:30.963072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:22.576 BaseBdev4 00:12:22.576 [2024-11-17 01:32:30.963259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.577 [2024-11-17 01:32:30.963283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:22.577 [2024-11-17 01:32:30.963444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.577 [ 00:12:22.577 { 00:12:22.577 "name": "BaseBdev4", 00:12:22.577 "aliases": [ 00:12:22.577 "61d2dd9a-e1af-4088-a790-e586d3d856fc" 00:12:22.577 ], 00:12:22.577 "product_name": "Malloc disk", 00:12:22.577 "block_size": 512, 00:12:22.577 
"num_blocks": 65536, 00:12:22.577 "uuid": "61d2dd9a-e1af-4088-a790-e586d3d856fc", 00:12:22.577 "assigned_rate_limits": { 00:12:22.577 "rw_ios_per_sec": 0, 00:12:22.577 "rw_mbytes_per_sec": 0, 00:12:22.577 "r_mbytes_per_sec": 0, 00:12:22.577 "w_mbytes_per_sec": 0 00:12:22.577 }, 00:12:22.577 "claimed": true, 00:12:22.577 "claim_type": "exclusive_write", 00:12:22.577 "zoned": false, 00:12:22.577 "supported_io_types": { 00:12:22.577 "read": true, 00:12:22.577 "write": true, 00:12:22.577 "unmap": true, 00:12:22.577 "flush": true, 00:12:22.577 "reset": true, 00:12:22.577 "nvme_admin": false, 00:12:22.577 "nvme_io": false, 00:12:22.577 "nvme_io_md": false, 00:12:22.577 "write_zeroes": true, 00:12:22.577 "zcopy": true, 00:12:22.577 "get_zone_info": false, 00:12:22.577 "zone_management": false, 00:12:22.577 "zone_append": false, 00:12:22.577 "compare": false, 00:12:22.577 "compare_and_write": false, 00:12:22.577 "abort": true, 00:12:22.577 "seek_hole": false, 00:12:22.577 "seek_data": false, 00:12:22.577 "copy": true, 00:12:22.577 "nvme_iov_md": false 00:12:22.577 }, 00:12:22.577 "memory_domains": [ 00:12:22.577 { 00:12:22.577 "dma_device_id": "system", 00:12:22.577 "dma_device_type": 1 00:12:22.577 }, 00:12:22.577 { 00:12:22.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.577 "dma_device_type": 2 00:12:22.577 } 00:12:22.577 ], 00:12:22.577 "driver_specific": {} 00:12:22.577 } 00:12:22.577 ] 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.577 01:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.577 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.836 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.836 "name": "Existed_Raid", 00:12:22.836 "uuid": "c8b4a0ac-955a-432c-9019-1f526ecb6f51", 00:12:22.836 "strip_size_kb": 0, 00:12:22.836 "state": "online", 00:12:22.836 "raid_level": "raid1", 00:12:22.836 "superblock": true, 00:12:22.836 "num_base_bdevs": 4, 
00:12:22.836 "num_base_bdevs_discovered": 4, 00:12:22.836 "num_base_bdevs_operational": 4, 00:12:22.836 "base_bdevs_list": [ 00:12:22.836 { 00:12:22.836 "name": "BaseBdev1", 00:12:22.836 "uuid": "3adfd5ce-7b6c-43b9-b0a1-2f7af0b5af1c", 00:12:22.836 "is_configured": true, 00:12:22.836 "data_offset": 2048, 00:12:22.836 "data_size": 63488 00:12:22.836 }, 00:12:22.836 { 00:12:22.836 "name": "BaseBdev2", 00:12:22.836 "uuid": "b4018b9a-67ef-4b10-a792-7c88ec4aeda6", 00:12:22.836 "is_configured": true, 00:12:22.836 "data_offset": 2048, 00:12:22.836 "data_size": 63488 00:12:22.836 }, 00:12:22.836 { 00:12:22.836 "name": "BaseBdev3", 00:12:22.836 "uuid": "5809dbcf-bd87-40d9-a9c5-d6c9f6980a1b", 00:12:22.836 "is_configured": true, 00:12:22.836 "data_offset": 2048, 00:12:22.836 "data_size": 63488 00:12:22.836 }, 00:12:22.836 { 00:12:22.836 "name": "BaseBdev4", 00:12:22.836 "uuid": "61d2dd9a-e1af-4088-a790-e586d3d856fc", 00:12:22.836 "is_configured": true, 00:12:22.836 "data_offset": 2048, 00:12:22.836 "data_size": 63488 00:12:22.836 } 00:12:22.836 ] 00:12:22.836 }' 00:12:22.836 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.836 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.094 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:23.094 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:23.094 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.094 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.094 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.094 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.095 
01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:23.095 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.095 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.095 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.095 [2024-11-17 01:32:31.445985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.095 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.095 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.095 "name": "Existed_Raid", 00:12:23.095 "aliases": [ 00:12:23.095 "c8b4a0ac-955a-432c-9019-1f526ecb6f51" 00:12:23.095 ], 00:12:23.095 "product_name": "Raid Volume", 00:12:23.095 "block_size": 512, 00:12:23.095 "num_blocks": 63488, 00:12:23.095 "uuid": "c8b4a0ac-955a-432c-9019-1f526ecb6f51", 00:12:23.095 "assigned_rate_limits": { 00:12:23.095 "rw_ios_per_sec": 0, 00:12:23.095 "rw_mbytes_per_sec": 0, 00:12:23.095 "r_mbytes_per_sec": 0, 00:12:23.095 "w_mbytes_per_sec": 0 00:12:23.095 }, 00:12:23.095 "claimed": false, 00:12:23.095 "zoned": false, 00:12:23.095 "supported_io_types": { 00:12:23.095 "read": true, 00:12:23.095 "write": true, 00:12:23.095 "unmap": false, 00:12:23.095 "flush": false, 00:12:23.095 "reset": true, 00:12:23.095 "nvme_admin": false, 00:12:23.095 "nvme_io": false, 00:12:23.095 "nvme_io_md": false, 00:12:23.095 "write_zeroes": true, 00:12:23.095 "zcopy": false, 00:12:23.095 "get_zone_info": false, 00:12:23.095 "zone_management": false, 00:12:23.095 "zone_append": false, 00:12:23.095 "compare": false, 00:12:23.095 "compare_and_write": false, 00:12:23.095 "abort": false, 00:12:23.095 "seek_hole": false, 00:12:23.095 "seek_data": false, 00:12:23.095 "copy": false, 00:12:23.095 
"nvme_iov_md": false 00:12:23.095 }, 00:12:23.095 "memory_domains": [ 00:12:23.095 { 00:12:23.095 "dma_device_id": "system", 00:12:23.095 "dma_device_type": 1 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.095 "dma_device_type": 2 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "dma_device_id": "system", 00:12:23.095 "dma_device_type": 1 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.095 "dma_device_type": 2 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "dma_device_id": "system", 00:12:23.095 "dma_device_type": 1 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.095 "dma_device_type": 2 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "dma_device_id": "system", 00:12:23.095 "dma_device_type": 1 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.095 "dma_device_type": 2 00:12:23.095 } 00:12:23.095 ], 00:12:23.095 "driver_specific": { 00:12:23.095 "raid": { 00:12:23.095 "uuid": "c8b4a0ac-955a-432c-9019-1f526ecb6f51", 00:12:23.095 "strip_size_kb": 0, 00:12:23.095 "state": "online", 00:12:23.095 "raid_level": "raid1", 00:12:23.095 "superblock": true, 00:12:23.095 "num_base_bdevs": 4, 00:12:23.095 "num_base_bdevs_discovered": 4, 00:12:23.095 "num_base_bdevs_operational": 4, 00:12:23.095 "base_bdevs_list": [ 00:12:23.095 { 00:12:23.095 "name": "BaseBdev1", 00:12:23.095 "uuid": "3adfd5ce-7b6c-43b9-b0a1-2f7af0b5af1c", 00:12:23.095 "is_configured": true, 00:12:23.095 "data_offset": 2048, 00:12:23.095 "data_size": 63488 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "name": "BaseBdev2", 00:12:23.095 "uuid": "b4018b9a-67ef-4b10-a792-7c88ec4aeda6", 00:12:23.095 "is_configured": true, 00:12:23.095 "data_offset": 2048, 00:12:23.095 "data_size": 63488 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "name": "BaseBdev3", 00:12:23.095 "uuid": "5809dbcf-bd87-40d9-a9c5-d6c9f6980a1b", 00:12:23.095 "is_configured": true, 
00:12:23.095 "data_offset": 2048, 00:12:23.095 "data_size": 63488 00:12:23.095 }, 00:12:23.095 { 00:12:23.095 "name": "BaseBdev4", 00:12:23.095 "uuid": "61d2dd9a-e1af-4088-a790-e586d3d856fc", 00:12:23.095 "is_configured": true, 00:12:23.095 "data_offset": 2048, 00:12:23.095 "data_size": 63488 00:12:23.095 } 00:12:23.095 ] 00:12:23.095 } 00:12:23.095 } 00:12:23.095 }' 00:12:23.095 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.095 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:23.095 BaseBdev2 00:12:23.095 BaseBdev3 00:12:23.095 BaseBdev4' 00:12:23.095 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.353 01:32:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.353 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.353 [2024-11-17 01:32:31.757085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:23.612 01:32:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.612 "name": "Existed_Raid", 00:12:23.612 "uuid": "c8b4a0ac-955a-432c-9019-1f526ecb6f51", 00:12:23.612 "strip_size_kb": 0, 00:12:23.612 
"state": "online", 00:12:23.612 "raid_level": "raid1", 00:12:23.612 "superblock": true, 00:12:23.612 "num_base_bdevs": 4, 00:12:23.612 "num_base_bdevs_discovered": 3, 00:12:23.612 "num_base_bdevs_operational": 3, 00:12:23.612 "base_bdevs_list": [ 00:12:23.612 { 00:12:23.612 "name": null, 00:12:23.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.612 "is_configured": false, 00:12:23.612 "data_offset": 0, 00:12:23.612 "data_size": 63488 00:12:23.612 }, 00:12:23.612 { 00:12:23.612 "name": "BaseBdev2", 00:12:23.612 "uuid": "b4018b9a-67ef-4b10-a792-7c88ec4aeda6", 00:12:23.612 "is_configured": true, 00:12:23.612 "data_offset": 2048, 00:12:23.612 "data_size": 63488 00:12:23.612 }, 00:12:23.612 { 00:12:23.612 "name": "BaseBdev3", 00:12:23.612 "uuid": "5809dbcf-bd87-40d9-a9c5-d6c9f6980a1b", 00:12:23.612 "is_configured": true, 00:12:23.612 "data_offset": 2048, 00:12:23.612 "data_size": 63488 00:12:23.612 }, 00:12:23.612 { 00:12:23.612 "name": "BaseBdev4", 00:12:23.612 "uuid": "61d2dd9a-e1af-4088-a790-e586d3d856fc", 00:12:23.612 "is_configured": true, 00:12:23.612 "data_offset": 2048, 00:12:23.612 "data_size": 63488 00:12:23.612 } 00:12:23.612 ] 00:12:23.612 }' 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.612 01:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.871 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:23.871 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.871 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.871 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.871 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.871 01:32:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.871 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.130 [2024-11-17 01:32:32.344541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.130 [2024-11-17 01:32:32.489940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.130 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.389 [2024-11-17 01:32:32.636921] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:24.389 [2024-11-17 01:32:32.637021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.389 [2024-11-17 01:32:32.729691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.389 [2024-11-17 01:32:32.729749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.389 [2024-11-17 01:32:32.729782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.389 BaseBdev2 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.389 01:32:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:24.651 [ 00:12:24.651 { 00:12:24.651 "name": "BaseBdev2", 00:12:24.651 "aliases": [ 00:12:24.651 "695665b2-f4ae-4a32-b6cf-fab5552c754f" 00:12:24.651 ], 00:12:24.651 "product_name": "Malloc disk", 00:12:24.651 "block_size": 512, 00:12:24.651 "num_blocks": 65536, 00:12:24.651 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:24.651 "assigned_rate_limits": { 00:12:24.651 "rw_ios_per_sec": 0, 00:12:24.651 "rw_mbytes_per_sec": 0, 00:12:24.651 "r_mbytes_per_sec": 0, 00:12:24.651 "w_mbytes_per_sec": 0 00:12:24.651 }, 00:12:24.651 "claimed": false, 00:12:24.651 "zoned": false, 00:12:24.651 "supported_io_types": { 00:12:24.651 "read": true, 00:12:24.651 "write": true, 00:12:24.651 "unmap": true, 00:12:24.651 "flush": true, 00:12:24.651 "reset": true, 00:12:24.651 "nvme_admin": false, 00:12:24.651 "nvme_io": false, 00:12:24.651 "nvme_io_md": false, 00:12:24.651 "write_zeroes": true, 00:12:24.651 "zcopy": true, 00:12:24.651 "get_zone_info": false, 00:12:24.651 "zone_management": false, 00:12:24.651 "zone_append": false, 00:12:24.651 "compare": false, 00:12:24.651 "compare_and_write": false, 00:12:24.651 "abort": true, 00:12:24.651 "seek_hole": false, 00:12:24.651 "seek_data": false, 00:12:24.651 "copy": true, 00:12:24.651 "nvme_iov_md": false 00:12:24.651 }, 00:12:24.651 "memory_domains": [ 00:12:24.651 { 00:12:24.651 "dma_device_id": "system", 00:12:24.651 "dma_device_type": 1 00:12:24.651 }, 00:12:24.651 { 00:12:24.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.651 "dma_device_type": 2 00:12:24.651 } 00:12:24.652 ], 00:12:24.652 "driver_specific": {} 00:12:24.652 } 00:12:24.652 ] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.652 01:32:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.652 BaseBdev3 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.652 01:32:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.652 [ 00:12:24.652 { 00:12:24.652 "name": "BaseBdev3", 00:12:24.652 "aliases": [ 00:12:24.652 "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f" 00:12:24.652 ], 00:12:24.652 "product_name": "Malloc disk", 00:12:24.652 "block_size": 512, 00:12:24.652 "num_blocks": 65536, 00:12:24.652 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:24.652 "assigned_rate_limits": { 00:12:24.652 "rw_ios_per_sec": 0, 00:12:24.652 "rw_mbytes_per_sec": 0, 00:12:24.652 "r_mbytes_per_sec": 0, 00:12:24.652 "w_mbytes_per_sec": 0 00:12:24.652 }, 00:12:24.652 "claimed": false, 00:12:24.652 "zoned": false, 00:12:24.652 "supported_io_types": { 00:12:24.652 "read": true, 00:12:24.652 "write": true, 00:12:24.652 "unmap": true, 00:12:24.652 "flush": true, 00:12:24.652 "reset": true, 00:12:24.652 "nvme_admin": false, 00:12:24.652 "nvme_io": false, 00:12:24.652 "nvme_io_md": false, 00:12:24.652 "write_zeroes": true, 00:12:24.652 "zcopy": true, 00:12:24.652 "get_zone_info": false, 00:12:24.652 "zone_management": false, 00:12:24.652 "zone_append": false, 00:12:24.652 "compare": false, 00:12:24.652 "compare_and_write": false, 00:12:24.652 "abort": true, 00:12:24.652 "seek_hole": false, 00:12:24.652 "seek_data": false, 00:12:24.652 "copy": true, 00:12:24.652 "nvme_iov_md": false 00:12:24.652 }, 00:12:24.652 "memory_domains": [ 00:12:24.652 { 00:12:24.652 "dma_device_id": "system", 00:12:24.652 "dma_device_type": 1 00:12:24.652 }, 00:12:24.652 { 00:12:24.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.652 "dma_device_type": 2 00:12:24.652 } 00:12:24.652 ], 00:12:24.652 "driver_specific": {} 00:12:24.652 } 00:12:24.652 ] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.652 BaseBdev4 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.652 01:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.652 [ 00:12:24.652 { 00:12:24.652 "name": "BaseBdev4", 00:12:24.652 "aliases": [ 00:12:24.652 "48873a85-824a-4720-ac2a-c7d30ef7f010" 00:12:24.652 ], 00:12:24.652 "product_name": "Malloc disk", 00:12:24.652 "block_size": 512, 00:12:24.652 "num_blocks": 65536, 00:12:24.652 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:24.652 "assigned_rate_limits": { 00:12:24.652 "rw_ios_per_sec": 0, 00:12:24.652 "rw_mbytes_per_sec": 0, 00:12:24.652 "r_mbytes_per_sec": 0, 00:12:24.652 "w_mbytes_per_sec": 0 00:12:24.652 }, 00:12:24.652 "claimed": false, 00:12:24.652 "zoned": false, 00:12:24.652 "supported_io_types": { 00:12:24.652 "read": true, 00:12:24.652 "write": true, 00:12:24.652 "unmap": true, 00:12:24.652 "flush": true, 00:12:24.652 "reset": true, 00:12:24.652 "nvme_admin": false, 00:12:24.652 "nvme_io": false, 00:12:24.652 "nvme_io_md": false, 00:12:24.652 "write_zeroes": true, 00:12:24.652 "zcopy": true, 00:12:24.652 "get_zone_info": false, 00:12:24.652 "zone_management": false, 00:12:24.652 "zone_append": false, 00:12:24.652 "compare": false, 00:12:24.652 "compare_and_write": false, 00:12:24.652 "abort": true, 00:12:24.652 "seek_hole": false, 00:12:24.652 "seek_data": false, 00:12:24.652 "copy": true, 00:12:24.652 "nvme_iov_md": false 00:12:24.652 }, 00:12:24.652 "memory_domains": [ 00:12:24.652 { 00:12:24.652 "dma_device_id": "system", 00:12:24.652 "dma_device_type": 1 00:12:24.652 }, 00:12:24.652 { 00:12:24.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.652 "dma_device_type": 2 00:12:24.652 } 00:12:24.652 ], 00:12:24.652 "driver_specific": {} 00:12:24.652 } 00:12:24.652 ] 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.652 [2024-11-17 01:32:33.030932] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:24.652 [2024-11-17 01:32:33.031040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:24.652 [2024-11-17 01:32:33.031095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.652 [2024-11-17 01:32:33.032991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.652 [2024-11-17 01:32:33.033079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.652 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.653 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.653 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.653 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.653 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.653 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.653 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.653 "name": "Existed_Raid", 00:12:24.653 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:24.653 "strip_size_kb": 0, 00:12:24.653 "state": "configuring", 00:12:24.653 "raid_level": "raid1", 00:12:24.653 "superblock": true, 00:12:24.653 "num_base_bdevs": 4, 00:12:24.653 "num_base_bdevs_discovered": 3, 00:12:24.653 "num_base_bdevs_operational": 4, 00:12:24.653 "base_bdevs_list": [ 00:12:24.653 { 00:12:24.653 "name": "BaseBdev1", 00:12:24.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.653 "is_configured": false, 00:12:24.653 "data_offset": 0, 00:12:24.653 "data_size": 0 00:12:24.653 }, 00:12:24.653 { 00:12:24.653 "name": "BaseBdev2", 00:12:24.653 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 
00:12:24.653 "is_configured": true, 00:12:24.653 "data_offset": 2048, 00:12:24.653 "data_size": 63488 00:12:24.653 }, 00:12:24.653 { 00:12:24.653 "name": "BaseBdev3", 00:12:24.653 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:24.653 "is_configured": true, 00:12:24.653 "data_offset": 2048, 00:12:24.653 "data_size": 63488 00:12:24.653 }, 00:12:24.653 { 00:12:24.653 "name": "BaseBdev4", 00:12:24.653 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:24.653 "is_configured": true, 00:12:24.653 "data_offset": 2048, 00:12:24.653 "data_size": 63488 00:12:24.653 } 00:12:24.653 ] 00:12:24.653 }' 00:12:24.653 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.653 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.220 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.221 [2024-11-17 01:32:33.414222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.221 "name": "Existed_Raid", 00:12:25.221 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:25.221 "strip_size_kb": 0, 00:12:25.221 "state": "configuring", 00:12:25.221 "raid_level": "raid1", 00:12:25.221 "superblock": true, 00:12:25.221 "num_base_bdevs": 4, 00:12:25.221 "num_base_bdevs_discovered": 2, 00:12:25.221 "num_base_bdevs_operational": 4, 00:12:25.221 "base_bdevs_list": [ 00:12:25.221 { 00:12:25.221 "name": "BaseBdev1", 00:12:25.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.221 "is_configured": false, 00:12:25.221 "data_offset": 0, 00:12:25.221 "data_size": 0 00:12:25.221 }, 00:12:25.221 { 00:12:25.221 "name": null, 00:12:25.221 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:25.221 
"is_configured": false, 00:12:25.221 "data_offset": 0, 00:12:25.221 "data_size": 63488 00:12:25.221 }, 00:12:25.221 { 00:12:25.221 "name": "BaseBdev3", 00:12:25.221 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:25.221 "is_configured": true, 00:12:25.221 "data_offset": 2048, 00:12:25.221 "data_size": 63488 00:12:25.221 }, 00:12:25.221 { 00:12:25.221 "name": "BaseBdev4", 00:12:25.221 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:25.221 "is_configured": true, 00:12:25.221 "data_offset": 2048, 00:12:25.221 "data_size": 63488 00:12:25.221 } 00:12:25.221 ] 00:12:25.221 }' 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.221 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.480 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.739 [2024-11-17 01:32:33.964573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.739 BaseBdev1 
00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.739 01:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.739 [ 00:12:25.739 { 00:12:25.739 "name": "BaseBdev1", 00:12:25.739 "aliases": [ 00:12:25.739 "7125c4dc-408e-4ea4-b493-bf1164d696a8" 00:12:25.739 ], 00:12:25.739 "product_name": "Malloc disk", 00:12:25.739 "block_size": 512, 00:12:25.739 "num_blocks": 65536, 00:12:25.739 "uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:25.739 "assigned_rate_limits": { 00:12:25.739 
"rw_ios_per_sec": 0, 00:12:25.739 "rw_mbytes_per_sec": 0, 00:12:25.739 "r_mbytes_per_sec": 0, 00:12:25.739 "w_mbytes_per_sec": 0 00:12:25.739 }, 00:12:25.739 "claimed": true, 00:12:25.739 "claim_type": "exclusive_write", 00:12:25.739 "zoned": false, 00:12:25.739 "supported_io_types": { 00:12:25.739 "read": true, 00:12:25.739 "write": true, 00:12:25.739 "unmap": true, 00:12:25.739 "flush": true, 00:12:25.739 "reset": true, 00:12:25.739 "nvme_admin": false, 00:12:25.739 "nvme_io": false, 00:12:25.739 "nvme_io_md": false, 00:12:25.739 "write_zeroes": true, 00:12:25.739 "zcopy": true, 00:12:25.739 "get_zone_info": false, 00:12:25.739 "zone_management": false, 00:12:25.739 "zone_append": false, 00:12:25.739 "compare": false, 00:12:25.739 "compare_and_write": false, 00:12:25.739 "abort": true, 00:12:25.739 "seek_hole": false, 00:12:25.739 "seek_data": false, 00:12:25.739 "copy": true, 00:12:25.739 "nvme_iov_md": false 00:12:25.739 }, 00:12:25.739 "memory_domains": [ 00:12:25.739 { 00:12:25.739 "dma_device_id": "system", 00:12:25.739 "dma_device_type": 1 00:12:25.739 }, 00:12:25.739 { 00:12:25.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.739 "dma_device_type": 2 00:12:25.739 } 00:12:25.739 ], 00:12:25.739 "driver_specific": {} 00:12:25.739 } 00:12:25.739 ] 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.739 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.739 "name": "Existed_Raid", 00:12:25.739 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:25.739 "strip_size_kb": 0, 00:12:25.739 "state": "configuring", 00:12:25.739 "raid_level": "raid1", 00:12:25.739 "superblock": true, 00:12:25.739 "num_base_bdevs": 4, 00:12:25.739 "num_base_bdevs_discovered": 3, 00:12:25.739 "num_base_bdevs_operational": 4, 00:12:25.739 "base_bdevs_list": [ 00:12:25.739 { 00:12:25.739 "name": "BaseBdev1", 00:12:25.739 "uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:25.739 "is_configured": true, 00:12:25.739 "data_offset": 2048, 00:12:25.739 "data_size": 63488 
00:12:25.739 }, 00:12:25.739 { 00:12:25.740 "name": null, 00:12:25.740 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:25.740 "is_configured": false, 00:12:25.740 "data_offset": 0, 00:12:25.740 "data_size": 63488 00:12:25.740 }, 00:12:25.740 { 00:12:25.740 "name": "BaseBdev3", 00:12:25.740 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:25.740 "is_configured": true, 00:12:25.740 "data_offset": 2048, 00:12:25.740 "data_size": 63488 00:12:25.740 }, 00:12:25.740 { 00:12:25.740 "name": "BaseBdev4", 00:12:25.740 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:25.740 "is_configured": true, 00:12:25.740 "data_offset": 2048, 00:12:25.740 "data_size": 63488 00:12:25.740 } 00:12:25.740 ] 00:12:25.740 }' 00:12:25.740 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.740 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.999 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.257 
[2024-11-17 01:32:34.459837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:26.257 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.257 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.257 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.257 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.257 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.257 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.258 01:32:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.258 "name": "Existed_Raid", 00:12:26.258 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:26.258 "strip_size_kb": 0, 00:12:26.258 "state": "configuring", 00:12:26.258 "raid_level": "raid1", 00:12:26.258 "superblock": true, 00:12:26.258 "num_base_bdevs": 4, 00:12:26.258 "num_base_bdevs_discovered": 2, 00:12:26.258 "num_base_bdevs_operational": 4, 00:12:26.258 "base_bdevs_list": [ 00:12:26.258 { 00:12:26.258 "name": "BaseBdev1", 00:12:26.258 "uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:26.258 "is_configured": true, 00:12:26.258 "data_offset": 2048, 00:12:26.258 "data_size": 63488 00:12:26.258 }, 00:12:26.258 { 00:12:26.258 "name": null, 00:12:26.258 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:26.258 "is_configured": false, 00:12:26.258 "data_offset": 0, 00:12:26.258 "data_size": 63488 00:12:26.258 }, 00:12:26.258 { 00:12:26.258 "name": null, 00:12:26.258 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:26.258 "is_configured": false, 00:12:26.258 "data_offset": 0, 00:12:26.258 "data_size": 63488 00:12:26.258 }, 00:12:26.258 { 00:12:26.258 "name": "BaseBdev4", 00:12:26.258 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:26.258 "is_configured": true, 00:12:26.258 "data_offset": 2048, 00:12:26.258 "data_size": 63488 00:12:26.258 } 00:12:26.258 ] 00:12:26.258 }' 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.258 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.516 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.516 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.516 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.516 01:32:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.516 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.516 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:26.516 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:26.516 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.516 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.516 [2024-11-17 01:32:34.935214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.517 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.775 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.775 "name": "Existed_Raid", 00:12:26.775 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:26.775 "strip_size_kb": 0, 00:12:26.775 "state": "configuring", 00:12:26.775 "raid_level": "raid1", 00:12:26.775 "superblock": true, 00:12:26.775 "num_base_bdevs": 4, 00:12:26.775 "num_base_bdevs_discovered": 3, 00:12:26.775 "num_base_bdevs_operational": 4, 00:12:26.775 "base_bdevs_list": [ 00:12:26.775 { 00:12:26.775 "name": "BaseBdev1", 00:12:26.775 "uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:26.775 "is_configured": true, 00:12:26.775 "data_offset": 2048, 00:12:26.775 "data_size": 63488 00:12:26.775 }, 00:12:26.775 { 00:12:26.775 "name": null, 00:12:26.775 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:26.775 "is_configured": false, 00:12:26.775 "data_offset": 0, 00:12:26.775 "data_size": 63488 00:12:26.775 }, 00:12:26.775 { 00:12:26.775 "name": "BaseBdev3", 00:12:26.775 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:26.775 "is_configured": true, 00:12:26.775 "data_offset": 2048, 00:12:26.775 "data_size": 63488 00:12:26.775 }, 00:12:26.775 { 00:12:26.775 "name": "BaseBdev4", 00:12:26.775 "uuid": 
"48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:26.775 "is_configured": true, 00:12:26.775 "data_offset": 2048, 00:12:26.775 "data_size": 63488 00:12:26.775 } 00:12:26.775 ] 00:12:26.775 }' 00:12:26.775 01:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.775 01:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.034 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.034 [2024-11-17 01:32:35.442802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.292 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.292 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.293 "name": "Existed_Raid", 00:12:27.293 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:27.293 "strip_size_kb": 0, 00:12:27.293 "state": "configuring", 00:12:27.293 "raid_level": "raid1", 00:12:27.293 "superblock": true, 00:12:27.293 "num_base_bdevs": 4, 00:12:27.293 "num_base_bdevs_discovered": 2, 00:12:27.293 "num_base_bdevs_operational": 4, 00:12:27.293 "base_bdevs_list": [ 00:12:27.293 { 00:12:27.293 "name": null, 00:12:27.293 
"uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:27.293 "is_configured": false, 00:12:27.293 "data_offset": 0, 00:12:27.293 "data_size": 63488 00:12:27.293 }, 00:12:27.293 { 00:12:27.293 "name": null, 00:12:27.293 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:27.293 "is_configured": false, 00:12:27.293 "data_offset": 0, 00:12:27.293 "data_size": 63488 00:12:27.293 }, 00:12:27.293 { 00:12:27.293 "name": "BaseBdev3", 00:12:27.293 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:27.293 "is_configured": true, 00:12:27.293 "data_offset": 2048, 00:12:27.293 "data_size": 63488 00:12:27.293 }, 00:12:27.293 { 00:12:27.293 "name": "BaseBdev4", 00:12:27.293 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:27.293 "is_configured": true, 00:12:27.293 "data_offset": 2048, 00:12:27.293 "data_size": 63488 00:12:27.293 } 00:12:27.293 ] 00:12:27.293 }' 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.293 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.551 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.552 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.552 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.552 01:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:27.552 01:32:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.810 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:27.810 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:27.810 01:32:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 [2024-11-17 01:32:36.036020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.811 01:32:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.811 "name": "Existed_Raid", 00:12:27.811 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:27.811 "strip_size_kb": 0, 00:12:27.811 "state": "configuring", 00:12:27.811 "raid_level": "raid1", 00:12:27.811 "superblock": true, 00:12:27.811 "num_base_bdevs": 4, 00:12:27.811 "num_base_bdevs_discovered": 3, 00:12:27.811 "num_base_bdevs_operational": 4, 00:12:27.811 "base_bdevs_list": [ 00:12:27.811 { 00:12:27.811 "name": null, 00:12:27.811 "uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:27.811 "is_configured": false, 00:12:27.811 "data_offset": 0, 00:12:27.811 "data_size": 63488 00:12:27.811 }, 00:12:27.811 { 00:12:27.811 "name": "BaseBdev2", 00:12:27.811 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:27.811 "is_configured": true, 00:12:27.811 "data_offset": 2048, 00:12:27.811 "data_size": 63488 00:12:27.811 }, 00:12:27.811 { 00:12:27.811 "name": "BaseBdev3", 00:12:27.811 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:27.811 "is_configured": true, 00:12:27.811 "data_offset": 2048, 00:12:27.811 "data_size": 63488 00:12:27.811 }, 00:12:27.811 { 00:12:27.811 "name": "BaseBdev4", 00:12:27.811 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:27.811 "is_configured": true, 00:12:27.811 "data_offset": 2048, 00:12:27.811 "data_size": 63488 00:12:27.811 } 00:12:27.811 ] 00:12:27.811 }' 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.811 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.072 01:32:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7125c4dc-408e-4ea4-b493-bf1164d696a8 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.072 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.340 [2024-11-17 01:32:36.567551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:28.340 [2024-11-17 01:32:36.567923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:28.340 [2024-11-17 01:32:36.567949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:28.340 [2024-11-17 01:32:36.568247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:28.340 [2024-11-17 01:32:36.568406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:28.340 [2024-11-17 01:32:36.568417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:28.340 NewBaseBdev 00:12:28.340 [2024-11-17 01:32:36.568563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.340 01:32:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.340 [ 00:12:28.340 { 00:12:28.340 "name": "NewBaseBdev", 00:12:28.340 "aliases": [ 00:12:28.340 "7125c4dc-408e-4ea4-b493-bf1164d696a8" 00:12:28.340 ], 00:12:28.340 "product_name": "Malloc disk", 00:12:28.340 "block_size": 512, 00:12:28.340 "num_blocks": 65536, 00:12:28.340 "uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:28.340 "assigned_rate_limits": { 00:12:28.340 "rw_ios_per_sec": 0, 00:12:28.340 "rw_mbytes_per_sec": 0, 00:12:28.340 "r_mbytes_per_sec": 0, 00:12:28.340 "w_mbytes_per_sec": 0 00:12:28.340 }, 00:12:28.340 "claimed": true, 00:12:28.340 "claim_type": "exclusive_write", 00:12:28.340 "zoned": false, 00:12:28.340 "supported_io_types": { 00:12:28.340 "read": true, 00:12:28.340 "write": true, 00:12:28.340 "unmap": true, 00:12:28.340 "flush": true, 00:12:28.340 "reset": true, 00:12:28.340 "nvme_admin": false, 00:12:28.340 "nvme_io": false, 00:12:28.340 "nvme_io_md": false, 00:12:28.340 "write_zeroes": true, 00:12:28.340 "zcopy": true, 00:12:28.340 "get_zone_info": false, 00:12:28.340 "zone_management": false, 00:12:28.340 "zone_append": false, 00:12:28.340 "compare": false, 00:12:28.340 "compare_and_write": false, 00:12:28.340 "abort": true, 00:12:28.340 "seek_hole": false, 00:12:28.340 "seek_data": false, 00:12:28.340 "copy": true, 00:12:28.340 "nvme_iov_md": false 00:12:28.340 }, 00:12:28.340 "memory_domains": [ 00:12:28.340 { 00:12:28.340 "dma_device_id": "system", 00:12:28.340 "dma_device_type": 1 00:12:28.340 }, 00:12:28.340 { 00:12:28.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.340 "dma_device_type": 2 00:12:28.340 } 00:12:28.340 ], 00:12:28.340 "driver_specific": {} 00:12:28.340 } 00:12:28.340 ] 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:28.340 01:32:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.340 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.341 "name": "Existed_Raid", 00:12:28.341 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:28.341 "strip_size_kb": 0, 00:12:28.341 
"state": "online", 00:12:28.341 "raid_level": "raid1", 00:12:28.341 "superblock": true, 00:12:28.341 "num_base_bdevs": 4, 00:12:28.341 "num_base_bdevs_discovered": 4, 00:12:28.341 "num_base_bdevs_operational": 4, 00:12:28.341 "base_bdevs_list": [ 00:12:28.341 { 00:12:28.341 "name": "NewBaseBdev", 00:12:28.341 "uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:28.341 "is_configured": true, 00:12:28.341 "data_offset": 2048, 00:12:28.341 "data_size": 63488 00:12:28.341 }, 00:12:28.341 { 00:12:28.341 "name": "BaseBdev2", 00:12:28.341 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:28.341 "is_configured": true, 00:12:28.341 "data_offset": 2048, 00:12:28.341 "data_size": 63488 00:12:28.341 }, 00:12:28.341 { 00:12:28.341 "name": "BaseBdev3", 00:12:28.341 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:28.341 "is_configured": true, 00:12:28.341 "data_offset": 2048, 00:12:28.341 "data_size": 63488 00:12:28.341 }, 00:12:28.341 { 00:12:28.341 "name": "BaseBdev4", 00:12:28.341 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:28.341 "is_configured": true, 00:12:28.341 "data_offset": 2048, 00:12:28.341 "data_size": 63488 00:12:28.341 } 00:12:28.341 ] 00:12:28.341 }' 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.341 01:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.619 
01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.619 [2024-11-17 01:32:37.047293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.619 "name": "Existed_Raid", 00:12:28.619 "aliases": [ 00:12:28.619 "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c" 00:12:28.619 ], 00:12:28.619 "product_name": "Raid Volume", 00:12:28.619 "block_size": 512, 00:12:28.619 "num_blocks": 63488, 00:12:28.619 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:28.619 "assigned_rate_limits": { 00:12:28.619 "rw_ios_per_sec": 0, 00:12:28.619 "rw_mbytes_per_sec": 0, 00:12:28.619 "r_mbytes_per_sec": 0, 00:12:28.619 "w_mbytes_per_sec": 0 00:12:28.619 }, 00:12:28.619 "claimed": false, 00:12:28.619 "zoned": false, 00:12:28.619 "supported_io_types": { 00:12:28.619 "read": true, 00:12:28.619 "write": true, 00:12:28.619 "unmap": false, 00:12:28.619 "flush": false, 00:12:28.619 "reset": true, 00:12:28.619 "nvme_admin": false, 00:12:28.619 "nvme_io": false, 00:12:28.619 "nvme_io_md": false, 00:12:28.619 "write_zeroes": true, 00:12:28.619 "zcopy": false, 00:12:28.619 "get_zone_info": false, 00:12:28.619 "zone_management": false, 00:12:28.619 "zone_append": false, 00:12:28.619 "compare": false, 00:12:28.619 "compare_and_write": false, 00:12:28.619 
"abort": false, 00:12:28.619 "seek_hole": false, 00:12:28.619 "seek_data": false, 00:12:28.619 "copy": false, 00:12:28.619 "nvme_iov_md": false 00:12:28.619 }, 00:12:28.619 "memory_domains": [ 00:12:28.619 { 00:12:28.619 "dma_device_id": "system", 00:12:28.619 "dma_device_type": 1 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.619 "dma_device_type": 2 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "dma_device_id": "system", 00:12:28.619 "dma_device_type": 1 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.619 "dma_device_type": 2 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "dma_device_id": "system", 00:12:28.619 "dma_device_type": 1 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.619 "dma_device_type": 2 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "dma_device_id": "system", 00:12:28.619 "dma_device_type": 1 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.619 "dma_device_type": 2 00:12:28.619 } 00:12:28.619 ], 00:12:28.619 "driver_specific": { 00:12:28.619 "raid": { 00:12:28.619 "uuid": "0455dfc5-22f1-4e6f-853b-9213a9e9fc8c", 00:12:28.619 "strip_size_kb": 0, 00:12:28.619 "state": "online", 00:12:28.619 "raid_level": "raid1", 00:12:28.619 "superblock": true, 00:12:28.619 "num_base_bdevs": 4, 00:12:28.619 "num_base_bdevs_discovered": 4, 00:12:28.619 "num_base_bdevs_operational": 4, 00:12:28.619 "base_bdevs_list": [ 00:12:28.619 { 00:12:28.619 "name": "NewBaseBdev", 00:12:28.619 "uuid": "7125c4dc-408e-4ea4-b493-bf1164d696a8", 00:12:28.619 "is_configured": true, 00:12:28.619 "data_offset": 2048, 00:12:28.619 "data_size": 63488 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "name": "BaseBdev2", 00:12:28.619 "uuid": "695665b2-f4ae-4a32-b6cf-fab5552c754f", 00:12:28.619 "is_configured": true, 00:12:28.619 "data_offset": 2048, 00:12:28.619 "data_size": 63488 00:12:28.619 }, 00:12:28.619 { 
00:12:28.619 "name": "BaseBdev3", 00:12:28.619 "uuid": "724cfa7b-19e7-4fe5-b77e-c8f0aa717c6f", 00:12:28.619 "is_configured": true, 00:12:28.619 "data_offset": 2048, 00:12:28.619 "data_size": 63488 00:12:28.619 }, 00:12:28.619 { 00:12:28.619 "name": "BaseBdev4", 00:12:28.619 "uuid": "48873a85-824a-4720-ac2a-c7d30ef7f010", 00:12:28.619 "is_configured": true, 00:12:28.619 "data_offset": 2048, 00:12:28.619 "data_size": 63488 00:12:28.619 } 00:12:28.619 ] 00:12:28.619 } 00:12:28.619 } 00:12:28.619 }' 00:12:28.619 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:28.878 BaseBdev2 00:12:28.878 BaseBdev3 00:12:28.878 BaseBdev4' 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:28.878 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.879 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.138 [2024-11-17 01:32:37.374378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.138 [2024-11-17 01:32:37.374468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.138 [2024-11-17 01:32:37.374561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.138 [2024-11-17 01:32:37.374887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.138 [2024-11-17 01:32:37.374903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73620 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73620 ']' 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73620 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73620 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73620' 00:12:29.138 killing process with pid 73620 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73620 00:12:29.138 [2024-11-17 01:32:37.416039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.138 01:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73620 00:12:29.397 [2024-11-17 01:32:37.843891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.774 01:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:30.774 00:12:30.774 real 0m11.561s 00:12:30.774 user 0m18.143s 00:12:30.774 sys 0m2.117s 00:12:30.774 01:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:30.774 ************************************ 00:12:30.774 END TEST raid_state_function_test_sb 00:12:30.774 ************************************ 00:12:30.774 01:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.774 01:32:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:30.774 01:32:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:30.774 01:32:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.774 01:32:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.774 ************************************ 00:12:30.774 START TEST raid_superblock_test 00:12:30.774 ************************************ 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:30.774 01:32:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74285 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74285 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74285 ']' 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.774 01:32:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.774 [2024-11-17 01:32:39.195412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:30.774 [2024-11-17 01:32:39.195538] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74285 ] 00:12:31.032 [2024-11-17 01:32:39.378382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.290 [2024-11-17 01:32:39.513620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.290 [2024-11-17 01:32:39.747564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.290 [2024-11-17 01:32:39.747621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:31.857 
01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.857 malloc1 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.857 [2024-11-17 01:32:40.079053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:31.857 [2024-11-17 01:32:40.079227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.857 [2024-11-17 01:32:40.079271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:31.857 [2024-11-17 01:32:40.079301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.857 [2024-11-17 01:32:40.081666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.857 [2024-11-17 01:32:40.081733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:31.857 pt1 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.857 malloc2 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.857 [2024-11-17 01:32:40.144283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.857 [2024-11-17 01:32:40.144353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.857 [2024-11-17 01:32:40.144375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:31.857 [2024-11-17 01:32:40.144384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.857 [2024-11-17 01:32:40.146808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.857 [2024-11-17 01:32:40.146896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.857 
pt2 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.857 malloc3 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.857 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.857 [2024-11-17 01:32:40.217267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:31.857 [2024-11-17 01:32:40.217380] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.857 [2024-11-17 01:32:40.217420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:31.857 [2024-11-17 01:32:40.217449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.857 [2024-11-17 01:32:40.219842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.857 [2024-11-17 01:32:40.219910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:31.857 pt3 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.858 malloc4 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.858 [2024-11-17 01:32:40.281317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:31.858 [2024-11-17 01:32:40.281431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.858 [2024-11-17 01:32:40.281466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:31.858 [2024-11-17 01:32:40.281492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.858 [2024-11-17 01:32:40.283961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.858 [2024-11-17 01:32:40.284031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:31.858 pt4 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.858 [2024-11-17 01:32:40.293328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.858 [2024-11-17 01:32:40.295356] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.858 [2024-11-17 01:32:40.295467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:31.858 [2024-11-17 01:32:40.295512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:31.858 [2024-11-17 01:32:40.295704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:31.858 [2024-11-17 01:32:40.295722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.858 [2024-11-17 01:32:40.296004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.858 [2024-11-17 01:32:40.296178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:31.858 [2024-11-17 01:32:40.296194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:31.858 [2024-11-17 01:32:40.296339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.858 
01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.858 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.117 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.117 "name": "raid_bdev1", 00:12:32.117 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:32.117 "strip_size_kb": 0, 00:12:32.117 "state": "online", 00:12:32.117 "raid_level": "raid1", 00:12:32.117 "superblock": true, 00:12:32.117 "num_base_bdevs": 4, 00:12:32.117 "num_base_bdevs_discovered": 4, 00:12:32.117 "num_base_bdevs_operational": 4, 00:12:32.117 "base_bdevs_list": [ 00:12:32.117 { 00:12:32.117 "name": "pt1", 00:12:32.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.117 "is_configured": true, 00:12:32.117 "data_offset": 2048, 00:12:32.117 "data_size": 63488 00:12:32.117 }, 00:12:32.117 { 00:12:32.117 "name": "pt2", 00:12:32.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.117 "is_configured": true, 00:12:32.117 "data_offset": 2048, 00:12:32.117 "data_size": 63488 00:12:32.117 }, 00:12:32.117 { 00:12:32.117 "name": "pt3", 00:12:32.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.117 "is_configured": true, 00:12:32.117 "data_offset": 2048, 00:12:32.117 "data_size": 63488 
00:12:32.117 }, 00:12:32.117 { 00:12:32.117 "name": "pt4", 00:12:32.117 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.117 "is_configured": true, 00:12:32.117 "data_offset": 2048, 00:12:32.117 "data_size": 63488 00:12:32.117 } 00:12:32.117 ] 00:12:32.117 }' 00:12:32.117 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.117 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:32.376 [2024-11-17 01:32:40.712934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.376 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:32.376 "name": "raid_bdev1", 00:12:32.376 "aliases": [ 00:12:32.376 "7dd13523-a4c1-4c2c-948f-e2a206dcaa81" 00:12:32.376 ], 
00:12:32.376 "product_name": "Raid Volume", 00:12:32.376 "block_size": 512, 00:12:32.376 "num_blocks": 63488, 00:12:32.376 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:32.376 "assigned_rate_limits": { 00:12:32.376 "rw_ios_per_sec": 0, 00:12:32.376 "rw_mbytes_per_sec": 0, 00:12:32.376 "r_mbytes_per_sec": 0, 00:12:32.376 "w_mbytes_per_sec": 0 00:12:32.376 }, 00:12:32.376 "claimed": false, 00:12:32.376 "zoned": false, 00:12:32.376 "supported_io_types": { 00:12:32.376 "read": true, 00:12:32.376 "write": true, 00:12:32.376 "unmap": false, 00:12:32.376 "flush": false, 00:12:32.376 "reset": true, 00:12:32.376 "nvme_admin": false, 00:12:32.376 "nvme_io": false, 00:12:32.376 "nvme_io_md": false, 00:12:32.376 "write_zeroes": true, 00:12:32.376 "zcopy": false, 00:12:32.376 "get_zone_info": false, 00:12:32.376 "zone_management": false, 00:12:32.376 "zone_append": false, 00:12:32.376 "compare": false, 00:12:32.376 "compare_and_write": false, 00:12:32.376 "abort": false, 00:12:32.376 "seek_hole": false, 00:12:32.376 "seek_data": false, 00:12:32.376 "copy": false, 00:12:32.376 "nvme_iov_md": false 00:12:32.376 }, 00:12:32.376 "memory_domains": [ 00:12:32.376 { 00:12:32.376 "dma_device_id": "system", 00:12:32.376 "dma_device_type": 1 00:12:32.376 }, 00:12:32.376 { 00:12:32.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.376 "dma_device_type": 2 00:12:32.376 }, 00:12:32.376 { 00:12:32.376 "dma_device_id": "system", 00:12:32.376 "dma_device_type": 1 00:12:32.376 }, 00:12:32.376 { 00:12:32.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.376 "dma_device_type": 2 00:12:32.376 }, 00:12:32.376 { 00:12:32.376 "dma_device_id": "system", 00:12:32.376 "dma_device_type": 1 00:12:32.376 }, 00:12:32.376 { 00:12:32.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.376 "dma_device_type": 2 00:12:32.376 }, 00:12:32.376 { 00:12:32.376 "dma_device_id": "system", 00:12:32.376 "dma_device_type": 1 00:12:32.376 }, 00:12:32.376 { 00:12:32.376 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:32.376 "dma_device_type": 2 00:12:32.376 } 00:12:32.376 ], 00:12:32.376 "driver_specific": { 00:12:32.376 "raid": { 00:12:32.376 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:32.376 "strip_size_kb": 0, 00:12:32.376 "state": "online", 00:12:32.376 "raid_level": "raid1", 00:12:32.376 "superblock": true, 00:12:32.376 "num_base_bdevs": 4, 00:12:32.376 "num_base_bdevs_discovered": 4, 00:12:32.376 "num_base_bdevs_operational": 4, 00:12:32.376 "base_bdevs_list": [ 00:12:32.376 { 00:12:32.376 "name": "pt1", 00:12:32.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.377 "is_configured": true, 00:12:32.377 "data_offset": 2048, 00:12:32.377 "data_size": 63488 00:12:32.377 }, 00:12:32.377 { 00:12:32.377 "name": "pt2", 00:12:32.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.377 "is_configured": true, 00:12:32.377 "data_offset": 2048, 00:12:32.377 "data_size": 63488 00:12:32.377 }, 00:12:32.377 { 00:12:32.377 "name": "pt3", 00:12:32.377 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.377 "is_configured": true, 00:12:32.377 "data_offset": 2048, 00:12:32.377 "data_size": 63488 00:12:32.377 }, 00:12:32.377 { 00:12:32.377 "name": "pt4", 00:12:32.377 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:32.377 "is_configured": true, 00:12:32.377 "data_offset": 2048, 00:12:32.377 "data_size": 63488 00:12:32.377 } 00:12:32.377 ] 00:12:32.377 } 00:12:32.377 } 00:12:32.377 }' 00:12:32.377 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:32.377 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:32.377 pt2 00:12:32.377 pt3 00:12:32.377 pt4' 00:12:32.377 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.636 01:32:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.636 01:32:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:32.636 [2024-11-17 01:32:41.040259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7dd13523-a4c1-4c2c-948f-e2a206dcaa81 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7dd13523-a4c1-4c2c-948f-e2a206dcaa81 ']' 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.636 [2024-11-17 01:32:41.083923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.636 [2024-11-17 01:32:41.083956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.636 [2024-11-17 01:32:41.084036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.636 [2024-11-17 01:32:41.084122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.636 [2024-11-17 01:32:41.084144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:32.636 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.895 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 [2024-11-17 01:32:41.231687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:32.895 [2024-11-17 01:32:41.233832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:32.895 [2024-11-17 01:32:41.233886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:32.895 [2024-11-17 01:32:41.233918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:32.895 [2024-11-17 01:32:41.233970] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:32.895 [2024-11-17 01:32:41.234027] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:32.895 [2024-11-17 01:32:41.234045] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:32.895 [2024-11-17 01:32:41.234064] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:32.895 [2024-11-17 01:32:41.234077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.895 [2024-11-17 01:32:41.234088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:12:32.895 request: 00:12:32.895 { 00:12:32.895 "name": "raid_bdev1", 00:12:32.895 "raid_level": "raid1", 00:12:32.895 "base_bdevs": [ 00:12:32.895 "malloc1", 00:12:32.895 "malloc2", 00:12:32.895 "malloc3", 00:12:32.895 "malloc4" 00:12:32.895 ], 00:12:32.895 "superblock": false, 00:12:32.895 "method": "bdev_raid_create", 00:12:32.895 "req_id": 1 00:12:32.895 } 00:12:32.895 Got JSON-RPC error response 00:12:32.895 response: 00:12:32.895 { 00:12:32.895 "code": -17, 00:12:32.896 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:32.896 } 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:32.896 01:32:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.896 [2024-11-17 01:32:41.299545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:32.896 [2024-11-17 01:32:41.299590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.896 [2024-11-17 01:32:41.299605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:32.896 [2024-11-17 01:32:41.299617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.896 [2024-11-17 01:32:41.301950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.896 [2024-11-17 01:32:41.301986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:32.896 [2024-11-17 01:32:41.302054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:32.896 [2024-11-17 01:32:41.302107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:32.896 pt1 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.896 01:32:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.896 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.154 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.154 "name": "raid_bdev1", 00:12:33.154 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:33.154 "strip_size_kb": 0, 00:12:33.154 "state": "configuring", 00:12:33.154 "raid_level": "raid1", 00:12:33.154 "superblock": true, 00:12:33.154 "num_base_bdevs": 4, 00:12:33.154 "num_base_bdevs_discovered": 1, 00:12:33.154 "num_base_bdevs_operational": 4, 00:12:33.155 "base_bdevs_list": [ 00:12:33.155 { 00:12:33.155 "name": "pt1", 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:33.155 "is_configured": true, 00:12:33.155 "data_offset": 2048, 00:12:33.155 "data_size": 63488 00:12:33.155 }, 00:12:33.155 { 00:12:33.155 "name": null, 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.155 "is_configured": false, 00:12:33.155 "data_offset": 2048, 00:12:33.155 "data_size": 63488 00:12:33.155 }, 00:12:33.155 { 00:12:33.155 "name": null, 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.155 
"is_configured": false, 00:12:33.155 "data_offset": 2048, 00:12:33.155 "data_size": 63488 00:12:33.155 }, 00:12:33.155 { 00:12:33.155 "name": null, 00:12:33.155 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:33.155 "is_configured": false, 00:12:33.155 "data_offset": 2048, 00:12:33.155 "data_size": 63488 00:12:33.155 } 00:12:33.155 ] 00:12:33.155 }' 00:12:33.155 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.155 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.413 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:33.413 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:33.413 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.413 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.413 [2024-11-17 01:32:41.774878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:33.413 [2024-11-17 01:32:41.774953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.413 [2024-11-17 01:32:41.774974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:33.413 [2024-11-17 01:32:41.774985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.413 [2024-11-17 01:32:41.775483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.413 [2024-11-17 01:32:41.775510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:33.413 [2024-11-17 01:32:41.775595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:33.414 [2024-11-17 01:32:41.775638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:33.414 pt2 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.414 [2024-11-17 01:32:41.786847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.414 "name": "raid_bdev1", 00:12:33.414 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:33.414 "strip_size_kb": 0, 00:12:33.414 "state": "configuring", 00:12:33.414 "raid_level": "raid1", 00:12:33.414 "superblock": true, 00:12:33.414 "num_base_bdevs": 4, 00:12:33.414 "num_base_bdevs_discovered": 1, 00:12:33.414 "num_base_bdevs_operational": 4, 00:12:33.414 "base_bdevs_list": [ 00:12:33.414 { 00:12:33.414 "name": "pt1", 00:12:33.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:33.414 "is_configured": true, 00:12:33.414 "data_offset": 2048, 00:12:33.414 "data_size": 63488 00:12:33.414 }, 00:12:33.414 { 00:12:33.414 "name": null, 00:12:33.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.414 "is_configured": false, 00:12:33.414 "data_offset": 0, 00:12:33.414 "data_size": 63488 00:12:33.414 }, 00:12:33.414 { 00:12:33.414 "name": null, 00:12:33.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.414 "is_configured": false, 00:12:33.414 "data_offset": 2048, 00:12:33.414 "data_size": 63488 00:12:33.414 }, 00:12:33.414 { 00:12:33.414 "name": null, 00:12:33.414 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:33.414 "is_configured": false, 00:12:33.414 "data_offset": 2048, 00:12:33.414 "data_size": 63488 00:12:33.414 } 00:12:33.414 ] 00:12:33.414 }' 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.414 01:32:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.983 [2024-11-17 01:32:42.206098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:33.983 [2024-11-17 01:32:42.206168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.983 [2024-11-17 01:32:42.206195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:33.983 [2024-11-17 01:32:42.206206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.983 [2024-11-17 01:32:42.206700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.983 [2024-11-17 01:32:42.206723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:33.983 [2024-11-17 01:32:42.206828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:33.983 [2024-11-17 01:32:42.206861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:33.983 pt2 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:33.983 01:32:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.983 [2024-11-17 01:32:42.218055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:33.983 [2024-11-17 01:32:42.218107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.983 [2024-11-17 01:32:42.218127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:33.983 [2024-11-17 01:32:42.218137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.983 [2024-11-17 01:32:42.218546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.983 [2024-11-17 01:32:42.218568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:33.983 [2024-11-17 01:32:42.218643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:33.983 [2024-11-17 01:32:42.218663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:33.983 pt3 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:33.983 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.984 [2024-11-17 01:32:42.229987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:33.984 [2024-11-17 
01:32:42.230026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.984 [2024-11-17 01:32:42.230041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:33.984 [2024-11-17 01:32:42.230048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.984 [2024-11-17 01:32:42.230400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.984 [2024-11-17 01:32:42.230421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:33.984 [2024-11-17 01:32:42.230477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:33.984 [2024-11-17 01:32:42.230494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:33.984 [2024-11-17 01:32:42.230624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.984 [2024-11-17 01:32:42.230644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.984 [2024-11-17 01:32:42.230898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:33.984 [2024-11-17 01:32:42.231063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:33.984 [2024-11-17 01:32:42.231090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:33.984 [2024-11-17 01:32:42.231250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.984 pt4 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.984 "name": "raid_bdev1", 00:12:33.984 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:33.984 "strip_size_kb": 0, 00:12:33.984 "state": "online", 00:12:33.984 "raid_level": "raid1", 00:12:33.984 "superblock": true, 00:12:33.984 "num_base_bdevs": 4, 00:12:33.984 
"num_base_bdevs_discovered": 4, 00:12:33.984 "num_base_bdevs_operational": 4, 00:12:33.984 "base_bdevs_list": [ 00:12:33.984 { 00:12:33.984 "name": "pt1", 00:12:33.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:33.984 "is_configured": true, 00:12:33.984 "data_offset": 2048, 00:12:33.984 "data_size": 63488 00:12:33.984 }, 00:12:33.984 { 00:12:33.984 "name": "pt2", 00:12:33.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.984 "is_configured": true, 00:12:33.984 "data_offset": 2048, 00:12:33.984 "data_size": 63488 00:12:33.984 }, 00:12:33.984 { 00:12:33.984 "name": "pt3", 00:12:33.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:33.984 "is_configured": true, 00:12:33.984 "data_offset": 2048, 00:12:33.984 "data_size": 63488 00:12:33.984 }, 00:12:33.984 { 00:12:33.984 "name": "pt4", 00:12:33.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:33.984 "is_configured": true, 00:12:33.984 "data_offset": 2048, 00:12:33.984 "data_size": 63488 00:12:33.984 } 00:12:33.984 ] 00:12:33.984 }' 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.984 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.243 [2024-11-17 01:32:42.677620] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.243 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.501 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.501 "name": "raid_bdev1", 00:12:34.501 "aliases": [ 00:12:34.501 "7dd13523-a4c1-4c2c-948f-e2a206dcaa81" 00:12:34.501 ], 00:12:34.501 "product_name": "Raid Volume", 00:12:34.501 "block_size": 512, 00:12:34.501 "num_blocks": 63488, 00:12:34.501 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:34.501 "assigned_rate_limits": { 00:12:34.501 "rw_ios_per_sec": 0, 00:12:34.501 "rw_mbytes_per_sec": 0, 00:12:34.501 "r_mbytes_per_sec": 0, 00:12:34.501 "w_mbytes_per_sec": 0 00:12:34.501 }, 00:12:34.501 "claimed": false, 00:12:34.501 "zoned": false, 00:12:34.502 "supported_io_types": { 00:12:34.502 "read": true, 00:12:34.502 "write": true, 00:12:34.502 "unmap": false, 00:12:34.502 "flush": false, 00:12:34.502 "reset": true, 00:12:34.502 "nvme_admin": false, 00:12:34.502 "nvme_io": false, 00:12:34.502 "nvme_io_md": false, 00:12:34.502 "write_zeroes": true, 00:12:34.502 "zcopy": false, 00:12:34.502 "get_zone_info": false, 00:12:34.502 "zone_management": false, 00:12:34.502 "zone_append": false, 00:12:34.502 "compare": false, 00:12:34.502 "compare_and_write": false, 00:12:34.502 "abort": false, 00:12:34.502 "seek_hole": false, 00:12:34.502 "seek_data": false, 00:12:34.502 "copy": false, 00:12:34.502 "nvme_iov_md": false 00:12:34.502 }, 00:12:34.502 "memory_domains": [ 00:12:34.502 { 00:12:34.502 "dma_device_id": "system", 00:12:34.502 
"dma_device_type": 1 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.502 "dma_device_type": 2 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "dma_device_id": "system", 00:12:34.502 "dma_device_type": 1 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.502 "dma_device_type": 2 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "dma_device_id": "system", 00:12:34.502 "dma_device_type": 1 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.502 "dma_device_type": 2 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "dma_device_id": "system", 00:12:34.502 "dma_device_type": 1 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.502 "dma_device_type": 2 00:12:34.502 } 00:12:34.502 ], 00:12:34.502 "driver_specific": { 00:12:34.502 "raid": { 00:12:34.502 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:34.502 "strip_size_kb": 0, 00:12:34.502 "state": "online", 00:12:34.502 "raid_level": "raid1", 00:12:34.502 "superblock": true, 00:12:34.502 "num_base_bdevs": 4, 00:12:34.502 "num_base_bdevs_discovered": 4, 00:12:34.502 "num_base_bdevs_operational": 4, 00:12:34.502 "base_bdevs_list": [ 00:12:34.502 { 00:12:34.502 "name": "pt1", 00:12:34.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.502 "is_configured": true, 00:12:34.502 "data_offset": 2048, 00:12:34.502 "data_size": 63488 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "name": "pt2", 00:12:34.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.502 "is_configured": true, 00:12:34.502 "data_offset": 2048, 00:12:34.502 "data_size": 63488 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "name": "pt3", 00:12:34.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.502 "is_configured": true, 00:12:34.502 "data_offset": 2048, 00:12:34.502 "data_size": 63488 00:12:34.502 }, 00:12:34.502 { 00:12:34.502 "name": "pt4", 00:12:34.502 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:34.502 "is_configured": true, 00:12:34.502 "data_offset": 2048, 00:12:34.502 "data_size": 63488 00:12:34.502 } 00:12:34.502 ] 00:12:34.502 } 00:12:34.502 } 00:12:34.502 }' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:34.502 pt2 00:12:34.502 pt3 00:12:34.502 pt4' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:34.502 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.761 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.761 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.761 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.761 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:34.761 01:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.761 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.761 01:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.761 [2024-11-17 01:32:42.981026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7dd13523-a4c1-4c2c-948f-e2a206dcaa81 '!=' 7dd13523-a4c1-4c2c-948f-e2a206dcaa81 ']' 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.761 [2024-11-17 01:32:43.020730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:34.761 01:32:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.761 "name": "raid_bdev1", 00:12:34.761 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:34.761 "strip_size_kb": 0, 00:12:34.761 "state": "online", 
00:12:34.761 "raid_level": "raid1", 00:12:34.761 "superblock": true, 00:12:34.761 "num_base_bdevs": 4, 00:12:34.761 "num_base_bdevs_discovered": 3, 00:12:34.761 "num_base_bdevs_operational": 3, 00:12:34.761 "base_bdevs_list": [ 00:12:34.761 { 00:12:34.761 "name": null, 00:12:34.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.761 "is_configured": false, 00:12:34.761 "data_offset": 0, 00:12:34.761 "data_size": 63488 00:12:34.761 }, 00:12:34.761 { 00:12:34.761 "name": "pt2", 00:12:34.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.761 "is_configured": true, 00:12:34.761 "data_offset": 2048, 00:12:34.761 "data_size": 63488 00:12:34.761 }, 00:12:34.761 { 00:12:34.761 "name": "pt3", 00:12:34.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.761 "is_configured": true, 00:12:34.761 "data_offset": 2048, 00:12:34.761 "data_size": 63488 00:12:34.761 }, 00:12:34.761 { 00:12:34.761 "name": "pt4", 00:12:34.761 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:34.761 "is_configured": true, 00:12:34.761 "data_offset": 2048, 00:12:34.761 "data_size": 63488 00:12:34.761 } 00:12:34.761 ] 00:12:34.761 }' 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.761 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.020 [2024-11-17 01:32:43.451973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.020 [2024-11-17 01:32:43.452012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.020 [2024-11-17 01:32:43.452100] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:35.020 [2024-11-17 01:32:43.452181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.020 [2024-11-17 01:32:43.452197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.020 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:35.279 
01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.279 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.279 [2024-11-17 01:32:43.547807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:35.279 [2024-11-17 01:32:43.547856] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.279 [2024-11-17 01:32:43.547875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:35.279 [2024-11-17 01:32:43.547884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.279 [2024-11-17 01:32:43.550289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.279 [2024-11-17 01:32:43.550321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:35.279 [2024-11-17 01:32:43.550403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:35.279 [2024-11-17 01:32:43.550456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:35.280 pt2 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.280 "name": "raid_bdev1", 00:12:35.280 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:35.280 "strip_size_kb": 0, 00:12:35.280 "state": "configuring", 00:12:35.280 "raid_level": "raid1", 00:12:35.280 "superblock": true, 00:12:35.280 "num_base_bdevs": 4, 00:12:35.280 "num_base_bdevs_discovered": 1, 00:12:35.280 "num_base_bdevs_operational": 3, 00:12:35.280 "base_bdevs_list": [ 00:12:35.280 { 00:12:35.280 "name": null, 00:12:35.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.280 "is_configured": false, 00:12:35.280 "data_offset": 2048, 00:12:35.280 "data_size": 63488 00:12:35.280 }, 00:12:35.280 { 00:12:35.280 "name": "pt2", 00:12:35.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.280 "is_configured": true, 00:12:35.280 "data_offset": 2048, 00:12:35.280 "data_size": 63488 00:12:35.280 }, 00:12:35.280 { 00:12:35.280 "name": null, 00:12:35.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.280 "is_configured": false, 00:12:35.280 "data_offset": 2048, 00:12:35.280 "data_size": 63488 00:12:35.280 }, 00:12:35.280 { 00:12:35.280 "name": null, 00:12:35.280 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:35.280 "is_configured": false, 00:12:35.280 "data_offset": 2048, 00:12:35.280 "data_size": 63488 00:12:35.280 } 00:12:35.280 ] 00:12:35.280 }' 
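The JSON blobs captured above come from `rpc_cmd bdev_raid_get_bdevs all` filtered through `jq -r '.[] | select(.name == "raid_bdev1")'`, and `verify_raid_bdev_state` then compares the expected state, raid level, strip size, and operational base bdev count against that output. A minimal Python sketch of that comparison (the sample dict is abbreviated from the log output above; the helper name and field subset are illustrative, not the shell implementation in bdev_raid.sh):

```python
import json

# Abbreviated raid bdev info, as dumped in the log above after pt3 is claimed:
# state "configuring", 2 of 4 base bdevs discovered, 3 expected operational.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Mirror the checks the shell helper performs on the jq-selected entry."""
    assert info["state"] == expected_state, info["state"]
    assert info["raid_level"] == raid_level, info["raid_level"]
    assert info["strip_size_kb"] == strip_size, info["strip_size_kb"]
    assert info["num_base_bdevs_operational"] == operational

# Matches the `verify_raid_bdev_state raid_bdev1 configuring raid1 0 3`
# invocation traced in the log.
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 3)
```

In the actual test script the same comparison is done with jq string extraction and bash `[[ ... ]]` tests rather than structured parsing.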
00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.280 01:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.848 [2024-11-17 01:32:44.043010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:35.848 [2024-11-17 01:32:44.043103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.848 [2024-11-17 01:32:44.043142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:35.848 [2024-11-17 01:32:44.043153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.848 [2024-11-17 01:32:44.043652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.848 [2024-11-17 01:32:44.043675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:35.848 [2024-11-17 01:32:44.043775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:35.848 [2024-11-17 01:32:44.043806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:35.848 pt3 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.848 "name": "raid_bdev1", 00:12:35.848 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:35.848 "strip_size_kb": 0, 00:12:35.848 "state": "configuring", 00:12:35.848 "raid_level": "raid1", 00:12:35.848 "superblock": true, 00:12:35.848 "num_base_bdevs": 4, 00:12:35.848 "num_base_bdevs_discovered": 2, 00:12:35.848 "num_base_bdevs_operational": 3, 00:12:35.848 
"base_bdevs_list": [ 00:12:35.848 { 00:12:35.848 "name": null, 00:12:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.848 "is_configured": false, 00:12:35.848 "data_offset": 2048, 00:12:35.848 "data_size": 63488 00:12:35.848 }, 00:12:35.848 { 00:12:35.848 "name": "pt2", 00:12:35.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.848 "is_configured": true, 00:12:35.848 "data_offset": 2048, 00:12:35.848 "data_size": 63488 00:12:35.848 }, 00:12:35.848 { 00:12:35.848 "name": "pt3", 00:12:35.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.848 "is_configured": true, 00:12:35.848 "data_offset": 2048, 00:12:35.848 "data_size": 63488 00:12:35.848 }, 00:12:35.848 { 00:12:35.848 "name": null, 00:12:35.848 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:35.848 "is_configured": false, 00:12:35.848 "data_offset": 2048, 00:12:35.848 "data_size": 63488 00:12:35.848 } 00:12:35.848 ] 00:12:35.848 }' 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.848 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.107 [2024-11-17 01:32:44.510216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:36.107 [2024-11-17 01:32:44.510291] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.107 [2024-11-17 01:32:44.510314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:36.107 [2024-11-17 01:32:44.510324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.107 [2024-11-17 01:32:44.510816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.107 [2024-11-17 01:32:44.510840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:36.107 [2024-11-17 01:32:44.510931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:36.107 [2024-11-17 01:32:44.510975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:36.107 [2024-11-17 01:32:44.511152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:36.107 [2024-11-17 01:32:44.511168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:36.107 [2024-11-17 01:32:44.511460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:36.107 [2024-11-17 01:32:44.511628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:36.107 [2024-11-17 01:32:44.511648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:36.107 [2024-11-17 01:32:44.511822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.107 pt4 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.107 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.366 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.366 "name": "raid_bdev1", 00:12:36.366 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:36.366 "strip_size_kb": 0, 00:12:36.366 "state": "online", 00:12:36.366 "raid_level": "raid1", 00:12:36.366 "superblock": true, 00:12:36.366 "num_base_bdevs": 4, 00:12:36.366 "num_base_bdevs_discovered": 3, 00:12:36.366 "num_base_bdevs_operational": 3, 00:12:36.366 "base_bdevs_list": [ 00:12:36.366 { 00:12:36.366 "name": null, 00:12:36.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.366 "is_configured": false, 00:12:36.366 
"data_offset": 2048, 00:12:36.366 "data_size": 63488 00:12:36.366 }, 00:12:36.366 { 00:12:36.366 "name": "pt2", 00:12:36.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.366 "is_configured": true, 00:12:36.366 "data_offset": 2048, 00:12:36.366 "data_size": 63488 00:12:36.366 }, 00:12:36.366 { 00:12:36.366 "name": "pt3", 00:12:36.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.366 "is_configured": true, 00:12:36.366 "data_offset": 2048, 00:12:36.366 "data_size": 63488 00:12:36.366 }, 00:12:36.366 { 00:12:36.366 "name": "pt4", 00:12:36.366 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:36.366 "is_configured": true, 00:12:36.366 "data_offset": 2048, 00:12:36.366 "data_size": 63488 00:12:36.366 } 00:12:36.366 ] 00:12:36.366 }' 00:12:36.366 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.366 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.625 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:36.625 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.626 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.626 [2024-11-17 01:32:44.949396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.626 [2024-11-17 01:32:44.949449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.626 [2024-11-17 01:32:44.949531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.626 [2024-11-17 01:32:44.949606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.626 [2024-11-17 01:32:44.949625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:36.626 01:32:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.626 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.626 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.626 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.626 01:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:36.626 01:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.626 [2024-11-17 01:32:45.025247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:36.626 [2024-11-17 01:32:45.025306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:36.626 [2024-11-17 01:32:45.025323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:36.626 [2024-11-17 01:32:45.025334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.626 [2024-11-17 01:32:45.027899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.626 [2024-11-17 01:32:45.027935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:36.626 [2024-11-17 01:32:45.028018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:36.626 [2024-11-17 01:32:45.028069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:36.626 [2024-11-17 01:32:45.028198] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:36.626 [2024-11-17 01:32:45.028217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.626 [2024-11-17 01:32:45.028233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:36.626 [2024-11-17 01:32:45.028306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:36.626 [2024-11-17 01:32:45.028436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:36.626 pt1 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.626 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.885 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.885 "name": "raid_bdev1", 00:12:36.885 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:36.885 "strip_size_kb": 0, 00:12:36.885 "state": "configuring", 00:12:36.885 "raid_level": "raid1", 00:12:36.885 "superblock": true, 00:12:36.885 "num_base_bdevs": 4, 00:12:36.885 "num_base_bdevs_discovered": 2, 00:12:36.885 "num_base_bdevs_operational": 3, 00:12:36.885 "base_bdevs_list": [ 00:12:36.885 { 00:12:36.885 "name": null, 00:12:36.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.885 "is_configured": false, 00:12:36.885 "data_offset": 2048, 00:12:36.885 
"data_size": 63488 00:12:36.885 }, 00:12:36.885 { 00:12:36.885 "name": "pt2", 00:12:36.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.885 "is_configured": true, 00:12:36.885 "data_offset": 2048, 00:12:36.885 "data_size": 63488 00:12:36.885 }, 00:12:36.885 { 00:12:36.885 "name": "pt3", 00:12:36.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:36.885 "is_configured": true, 00:12:36.885 "data_offset": 2048, 00:12:36.885 "data_size": 63488 00:12:36.885 }, 00:12:36.885 { 00:12:36.885 "name": null, 00:12:36.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:36.885 "is_configured": false, 00:12:36.885 "data_offset": 2048, 00:12:36.885 "data_size": 63488 00:12:36.885 } 00:12:36.885 ] 00:12:36.885 }' 00:12:36.885 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.885 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.144 [2024-11-17 
01:32:45.580352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:37.144 [2024-11-17 01:32:45.580426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.144 [2024-11-17 01:32:45.580451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:37.144 [2024-11-17 01:32:45.580461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.144 [2024-11-17 01:32:45.580960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.144 [2024-11-17 01:32:45.580988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:37.144 [2024-11-17 01:32:45.581078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:37.144 [2024-11-17 01:32:45.581121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:37.144 [2024-11-17 01:32:45.581275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:37.144 [2024-11-17 01:32:45.581290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.144 [2024-11-17 01:32:45.581562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:37.144 [2024-11-17 01:32:45.581726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:37.144 [2024-11-17 01:32:45.581743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:37.144 [2024-11-17 01:32:45.581910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.144 pt4 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.144 01:32:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.144 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.410 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.410 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.410 "name": "raid_bdev1", 00:12:37.410 "uuid": "7dd13523-a4c1-4c2c-948f-e2a206dcaa81", 00:12:37.410 "strip_size_kb": 0, 00:12:37.410 "state": "online", 00:12:37.410 "raid_level": "raid1", 00:12:37.410 "superblock": true, 00:12:37.410 "num_base_bdevs": 4, 00:12:37.410 "num_base_bdevs_discovered": 3, 00:12:37.410 "num_base_bdevs_operational": 3, 00:12:37.410 "base_bdevs_list": [ 00:12:37.410 { 
00:12:37.410 "name": null, 00:12:37.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.410 "is_configured": false, 00:12:37.410 "data_offset": 2048, 00:12:37.410 "data_size": 63488 00:12:37.410 }, 00:12:37.410 { 00:12:37.410 "name": "pt2", 00:12:37.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.410 "is_configured": true, 00:12:37.410 "data_offset": 2048, 00:12:37.410 "data_size": 63488 00:12:37.410 }, 00:12:37.410 { 00:12:37.410 "name": "pt3", 00:12:37.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:37.410 "is_configured": true, 00:12:37.410 "data_offset": 2048, 00:12:37.410 "data_size": 63488 00:12:37.410 }, 00:12:37.410 { 00:12:37.410 "name": "pt4", 00:12:37.410 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:37.410 "is_configured": true, 00:12:37.410 "data_offset": 2048, 00:12:37.410 "data_size": 63488 00:12:37.410 } 00:12:37.410 ] 00:12:37.410 }' 00:12:37.410 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.410 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.685 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:37.685 01:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.685 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.685 01:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:37.685 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.685 01:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.686 
01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:37.686 [2024-11-17 01:32:46.047920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7dd13523-a4c1-4c2c-948f-e2a206dcaa81 '!=' 7dd13523-a4c1-4c2c-948f-e2a206dcaa81 ']' 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74285 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74285 ']' 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74285 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74285 00:12:37.686 killing process with pid 74285 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74285' 00:12:37.686 01:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74285 00:12:37.686 [2024-11-17 01:32:46.132235] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.686 [2024-11-17 01:32:46.132334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.686 01:32:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74285 00:12:37.686 [2024-11-17 01:32:46.132412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.686 [2024-11-17 01:32:46.132427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:38.254 [2024-11-17 01:32:46.564828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.633 01:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:39.633 00:12:39.633 real 0m8.660s 00:12:39.633 user 0m13.437s 00:12:39.633 sys 0m1.674s 00:12:39.633 01:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.633 01:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.633 ************************************ 00:12:39.633 END TEST raid_superblock_test 00:12:39.633 ************************************ 00:12:39.633 01:32:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:39.633 01:32:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:39.633 01:32:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.633 01:32:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.633 ************************************ 00:12:39.633 START TEST raid_read_error_test 00:12:39.633 ************************************ 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:39.633 
01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:39.633 01:32:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L3tA3N4IRN 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74778 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74778 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74778 ']' 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.633 01:32:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.633 [2024-11-17 01:32:47.944064] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:39.633 [2024-11-17 01:32:47.944205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74778 ] 00:12:39.893 [2024-11-17 01:32:48.119197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.893 [2024-11-17 01:32:48.253217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.152 [2024-11-17 01:32:48.483329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.152 [2024-11-17 01:32:48.483375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.411 BaseBdev1_malloc 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:40.411 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.412 true 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.412 [2024-11-17 01:32:48.841732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:40.412 [2024-11-17 01:32:48.841804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.412 [2024-11-17 01:32:48.841823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:40.412 [2024-11-17 01:32:48.841834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.412 [2024-11-17 01:32:48.844159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.412 [2024-11-17 01:32:48.844194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.412 BaseBdev1 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.412 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 BaseBdev2_malloc 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 true 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 [2024-11-17 01:32:48.913027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:40.671 [2024-11-17 01:32:48.913081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.671 [2024-11-17 01:32:48.913097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:40.671 [2024-11-17 01:32:48.913108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.671 [2024-11-17 01:32:48.915488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.671 [2024-11-17 01:32:48.915523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.671 BaseBdev2 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 BaseBdev3_malloc 00:12:40.671 01:32:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 true 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 01:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 [2024-11-17 01:32:49.002544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:40.671 [2024-11-17 01:32:49.002604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.671 [2024-11-17 01:32:49.002624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:40.671 [2024-11-17 01:32:49.002635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.671 [2024-11-17 01:32:49.005087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.671 [2024-11-17 01:32:49.005125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:40.671 BaseBdev3 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 BaseBdev4_malloc 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.671 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.671 true 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.672 [2024-11-17 01:32:49.074174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:40.672 [2024-11-17 01:32:49.074227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.672 [2024-11-17 01:32:49.074243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:40.672 [2024-11-17 01:32:49.074253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.672 [2024-11-17 01:32:49.076679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.672 [2024-11-17 01:32:49.076715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:40.672 BaseBdev4 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.672 [2024-11-17 01:32:49.086212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.672 [2024-11-17 01:32:49.088307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.672 [2024-11-17 01:32:49.088385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.672 [2024-11-17 01:32:49.088447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:40.672 [2024-11-17 01:32:49.088679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:40.672 [2024-11-17 01:32:49.088699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.672 [2024-11-17 01:32:49.088959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:40.672 [2024-11-17 01:32:49.089129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:40.672 [2024-11-17 01:32:49.089144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:40.672 [2024-11-17 01:32:49.089296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:40.672 01:32:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.672 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.930 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.930 "name": "raid_bdev1", 00:12:40.930 "uuid": "ace5a644-248f-4548-97c2-9cd540b1bbab", 00:12:40.930 "strip_size_kb": 0, 00:12:40.930 "state": "online", 00:12:40.930 "raid_level": "raid1", 00:12:40.930 "superblock": true, 00:12:40.930 "num_base_bdevs": 4, 00:12:40.930 "num_base_bdevs_discovered": 4, 00:12:40.930 "num_base_bdevs_operational": 4, 00:12:40.930 "base_bdevs_list": [ 00:12:40.930 { 
00:12:40.930 "name": "BaseBdev1", 00:12:40.930 "uuid": "4fb2457e-e903-5e31-8b28-cdd87c7bba58", 00:12:40.930 "is_configured": true, 00:12:40.930 "data_offset": 2048, 00:12:40.930 "data_size": 63488 00:12:40.930 }, 00:12:40.930 { 00:12:40.930 "name": "BaseBdev2", 00:12:40.930 "uuid": "5220b509-f05c-5066-b2de-cc7288d9940f", 00:12:40.930 "is_configured": true, 00:12:40.930 "data_offset": 2048, 00:12:40.930 "data_size": 63488 00:12:40.930 }, 00:12:40.930 { 00:12:40.930 "name": "BaseBdev3", 00:12:40.930 "uuid": "2f04bf96-bd67-54ce-98ae-ca526a1bae49", 00:12:40.930 "is_configured": true, 00:12:40.930 "data_offset": 2048, 00:12:40.930 "data_size": 63488 00:12:40.930 }, 00:12:40.930 { 00:12:40.930 "name": "BaseBdev4", 00:12:40.930 "uuid": "0a8b2a50-8a8a-51af-b9ae-c67cb0aa3125", 00:12:40.930 "is_configured": true, 00:12:40.930 "data_offset": 2048, 00:12:40.930 "data_size": 63488 00:12:40.930 } 00:12:40.930 ] 00:12:40.930 }' 00:12:40.930 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.930 01:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.189 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:41.189 01:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:41.189 [2024-11-17 01:32:49.622626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.127 01:32:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.127 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.127 01:32:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.387 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.387 "name": "raid_bdev1", 00:12:42.387 "uuid": "ace5a644-248f-4548-97c2-9cd540b1bbab", 00:12:42.387 "strip_size_kb": 0, 00:12:42.387 "state": "online", 00:12:42.387 "raid_level": "raid1", 00:12:42.387 "superblock": true, 00:12:42.387 "num_base_bdevs": 4, 00:12:42.387 "num_base_bdevs_discovered": 4, 00:12:42.387 "num_base_bdevs_operational": 4, 00:12:42.387 "base_bdevs_list": [ 00:12:42.387 { 00:12:42.387 "name": "BaseBdev1", 00:12:42.387 "uuid": "4fb2457e-e903-5e31-8b28-cdd87c7bba58", 00:12:42.387 "is_configured": true, 00:12:42.387 "data_offset": 2048, 00:12:42.387 "data_size": 63488 00:12:42.387 }, 00:12:42.387 { 00:12:42.387 "name": "BaseBdev2", 00:12:42.387 "uuid": "5220b509-f05c-5066-b2de-cc7288d9940f", 00:12:42.387 "is_configured": true, 00:12:42.387 "data_offset": 2048, 00:12:42.387 "data_size": 63488 00:12:42.387 }, 00:12:42.387 { 00:12:42.387 "name": "BaseBdev3", 00:12:42.387 "uuid": "2f04bf96-bd67-54ce-98ae-ca526a1bae49", 00:12:42.387 "is_configured": true, 00:12:42.387 "data_offset": 2048, 00:12:42.387 "data_size": 63488 00:12:42.387 }, 00:12:42.387 { 00:12:42.387 "name": "BaseBdev4", 00:12:42.387 "uuid": "0a8b2a50-8a8a-51af-b9ae-c67cb0aa3125", 00:12:42.387 "is_configured": true, 00:12:42.387 "data_offset": 2048, 00:12:42.387 "data_size": 63488 00:12:42.387 } 00:12:42.387 ] 00:12:42.387 }' 00:12:42.387 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.387 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.647 [2024-11-17 01:32:50.962616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:42.647 [2024-11-17 01:32:50.962669] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.647 [2024-11-17 01:32:50.965233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.647 [2024-11-17 01:32:50.965305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.647 [2024-11-17 01:32:50.965433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.647 [2024-11-17 01:32:50.965448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:42.647 { 00:12:42.647 "results": [ 00:12:42.647 { 00:12:42.647 "job": "raid_bdev1", 00:12:42.647 "core_mask": "0x1", 00:12:42.647 "workload": "randrw", 00:12:42.647 "percentage": 50, 00:12:42.647 "status": "finished", 00:12:42.647 "queue_depth": 1, 00:12:42.647 "io_size": 131072, 00:12:42.647 "runtime": 1.340422, 00:12:42.647 "iops": 8136.989694290306, 00:12:42.647 "mibps": 1017.1237117862883, 00:12:42.647 "io_failed": 0, 00:12:42.647 "io_timeout": 0, 00:12:42.647 "avg_latency_us": 120.38035266803138, 00:12:42.647 "min_latency_us": 22.134497816593885, 00:12:42.647 "max_latency_us": 1559.6995633187773 00:12:42.647 } 00:12:42.647 ], 00:12:42.647 "core_count": 1 00:12:42.647 } 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74778 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74778 ']' 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74778 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.647 01:32:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74778 00:12:42.647 01:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.647 01:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.647 killing process with pid 74778 00:12:42.647 01:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74778' 00:12:42.647 01:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74778 00:12:42.647 [2024-11-17 01:32:51.007161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.647 01:32:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74778 00:12:43.216 [2024-11-17 01:32:51.365301] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L3tA3N4IRN 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:44.154 00:12:44.154 real 0m4.776s 00:12:44.154 user 0m5.499s 00:12:44.154 sys 0m0.655s 
00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.154 01:32:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.154 ************************************ 00:12:44.154 END TEST raid_read_error_test 00:12:44.154 ************************************ 00:12:44.414 01:32:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:44.414 01:32:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:44.414 01:32:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.414 01:32:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.414 ************************************ 00:12:44.414 START TEST raid_write_error_test 00:12:44.414 ************************************ 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0hr18nhvC3 00:12:44.414 01:32:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74929 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74929 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74929 ']' 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.414 01:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.415 01:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.415 01:32:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.415 [2024-11-17 01:32:52.794171] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:44.415 [2024-11-17 01:32:52.794296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74929 ] 00:12:44.674 [2024-11-17 01:32:52.953141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.674 [2024-11-17 01:32:53.083470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.934 [2024-11-17 01:32:53.312860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.934 [2024-11-17 01:32:53.312927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.194 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.194 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:45.194 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:45.194 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:45.194 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.194 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.454 BaseBdev1_malloc 00:12:45.454 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.454 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:45.454 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.454 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.454 true 00:12:45.454 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:45.454 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:45.454 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.454 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.454 [2024-11-17 01:32:53.687679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:45.454 [2024-11-17 01:32:53.687748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.454 [2024-11-17 01:32:53.687781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:45.454 [2024-11-17 01:32:53.687793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.454 [2024-11-17 01:32:53.690112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.454 [2024-11-17 01:32:53.690146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:45.454 BaseBdev1 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.455 BaseBdev2_malloc 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:45.455 01:32:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.455 true 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.455 [2024-11-17 01:32:53.760104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:45.455 [2024-11-17 01:32:53.760166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.455 [2024-11-17 01:32:53.760182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:45.455 [2024-11-17 01:32:53.760193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.455 [2024-11-17 01:32:53.762434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.455 [2024-11-17 01:32:53.762468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.455 BaseBdev2 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:45.455 BaseBdev3_malloc 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.455 true 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.455 [2024-11-17 01:32:53.847673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:45.455 [2024-11-17 01:32:53.847733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.455 [2024-11-17 01:32:53.847752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:45.455 [2024-11-17 01:32:53.847774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.455 [2024-11-17 01:32:53.850105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.455 [2024-11-17 01:32:53.850139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:45.455 BaseBdev3 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.455 BaseBdev4_malloc 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.455 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.715 true 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.716 [2024-11-17 01:32:53.920909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:45.716 [2024-11-17 01:32:53.920970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.716 [2024-11-17 01:32:53.920988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:45.716 [2024-11-17 01:32:53.920999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.716 [2024-11-17 01:32:53.923343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.716 [2024-11-17 01:32:53.923383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:45.716 BaseBdev4 
00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.716 [2024-11-17 01:32:53.932959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.716 [2024-11-17 01:32:53.934910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.716 [2024-11-17 01:32:53.934982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.716 [2024-11-17 01:32:53.935043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:45.716 [2024-11-17 01:32:53.935292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:45.716 [2024-11-17 01:32:53.935313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.716 [2024-11-17 01:32:53.935561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:45.716 [2024-11-17 01:32:53.935748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:45.716 [2024-11-17 01:32:53.935775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:45.716 [2024-11-17 01:32:53.935937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.716 "name": "raid_bdev1", 00:12:45.716 "uuid": "51f719db-2f51-4152-b176-483a62adc9d9", 00:12:45.716 "strip_size_kb": 0, 00:12:45.716 "state": "online", 00:12:45.716 "raid_level": "raid1", 00:12:45.716 "superblock": true, 00:12:45.716 "num_base_bdevs": 4, 00:12:45.716 "num_base_bdevs_discovered": 4, 00:12:45.716 
"num_base_bdevs_operational": 4, 00:12:45.716 "base_bdevs_list": [ 00:12:45.716 { 00:12:45.716 "name": "BaseBdev1", 00:12:45.716 "uuid": "f941bd13-cab4-5bb6-bd2c-368d64585ea1", 00:12:45.716 "is_configured": true, 00:12:45.716 "data_offset": 2048, 00:12:45.716 "data_size": 63488 00:12:45.716 }, 00:12:45.716 { 00:12:45.716 "name": "BaseBdev2", 00:12:45.716 "uuid": "2b97daff-4421-51a8-84ee-7185118beb19", 00:12:45.716 "is_configured": true, 00:12:45.716 "data_offset": 2048, 00:12:45.716 "data_size": 63488 00:12:45.716 }, 00:12:45.716 { 00:12:45.716 "name": "BaseBdev3", 00:12:45.716 "uuid": "a360007d-b52e-5d5e-8c15-fd7e6ec68198", 00:12:45.716 "is_configured": true, 00:12:45.716 "data_offset": 2048, 00:12:45.716 "data_size": 63488 00:12:45.716 }, 00:12:45.716 { 00:12:45.716 "name": "BaseBdev4", 00:12:45.716 "uuid": "9c02293c-c9e2-528b-af4a-c4b0812886a2", 00:12:45.716 "is_configured": true, 00:12:45.716 "data_offset": 2048, 00:12:45.716 "data_size": 63488 00:12:45.716 } 00:12:45.716 ] 00:12:45.716 }' 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.716 01:32:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.977 01:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:45.977 01:32:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:46.237 [2024-11-17 01:32:54.497474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.177 [2024-11-17 01:32:55.421235] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:47.177 [2024-11-17 01:32:55.421313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.177 [2024-11-17 01:32:55.421565] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.177 "name": "raid_bdev1", 00:12:47.177 "uuid": "51f719db-2f51-4152-b176-483a62adc9d9", 00:12:47.177 "strip_size_kb": 0, 00:12:47.177 "state": "online", 00:12:47.177 "raid_level": "raid1", 00:12:47.177 "superblock": true, 00:12:47.177 "num_base_bdevs": 4, 00:12:47.177 "num_base_bdevs_discovered": 3, 00:12:47.177 "num_base_bdevs_operational": 3, 00:12:47.177 "base_bdevs_list": [ 00:12:47.177 { 00:12:47.177 "name": null, 00:12:47.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.177 "is_configured": false, 00:12:47.177 "data_offset": 0, 00:12:47.177 "data_size": 63488 00:12:47.177 }, 00:12:47.177 { 00:12:47.177 "name": "BaseBdev2", 00:12:47.177 "uuid": "2b97daff-4421-51a8-84ee-7185118beb19", 00:12:47.177 "is_configured": true, 00:12:47.177 "data_offset": 2048, 00:12:47.177 "data_size": 63488 00:12:47.177 }, 00:12:47.177 { 00:12:47.177 "name": "BaseBdev3", 00:12:47.177 "uuid": "a360007d-b52e-5d5e-8c15-fd7e6ec68198", 00:12:47.177 "is_configured": true, 00:12:47.177 "data_offset": 2048, 00:12:47.177 "data_size": 63488 00:12:47.177 }, 00:12:47.177 { 00:12:47.177 "name": "BaseBdev4", 00:12:47.177 "uuid": "9c02293c-c9e2-528b-af4a-c4b0812886a2", 00:12:47.177 "is_configured": true, 00:12:47.177 "data_offset": 2048, 00:12:47.177 "data_size": 63488 00:12:47.177 } 00:12:47.177 ] 
00:12:47.177 }' 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.177 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.436 [2024-11-17 01:32:55.875242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.436 [2024-11-17 01:32:55.875292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.436 [2024-11-17 01:32:55.878013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.436 [2024-11-17 01:32:55.878066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.436 [2024-11-17 01:32:55.878186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.436 [2024-11-17 01:32:55.878198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:47.436 { 00:12:47.436 "results": [ 00:12:47.436 { 00:12:47.436 "job": "raid_bdev1", 00:12:47.436 "core_mask": "0x1", 00:12:47.436 "workload": "randrw", 00:12:47.436 "percentage": 50, 00:12:47.436 "status": "finished", 00:12:47.436 "queue_depth": 1, 00:12:47.436 "io_size": 131072, 00:12:47.436 "runtime": 1.378228, 00:12:47.436 "iops": 8861.37852372757, 00:12:47.436 "mibps": 1107.6723154659462, 00:12:47.436 "io_failed": 0, 00:12:47.436 "io_timeout": 0, 00:12:47.436 "avg_latency_us": 110.3419142820468, 00:12:47.436 "min_latency_us": 22.91703056768559, 00:12:47.436 "max_latency_us": 1287.825327510917 00:12:47.436 } 00:12:47.436 ], 00:12:47.436 "core_count": 1 
00:12:47.436 } 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74929 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74929 ']' 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74929 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.436 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74929 00:12:47.695 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.695 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.695 killing process with pid 74929 00:12:47.695 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74929' 00:12:47.695 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74929 00:12:47.695 [2024-11-17 01:32:55.922453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.695 01:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74929 00:12:47.954 [2024-11-17 01:32:56.295498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0hr18nhvC3 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:49.334 00:12:49.334 real 0m4.865s 00:12:49.334 user 0m5.627s 00:12:49.334 sys 0m0.693s 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.334 01:32:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.334 ************************************ 00:12:49.334 END TEST raid_write_error_test 00:12:49.334 ************************************ 00:12:49.334 01:32:57 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:49.334 01:32:57 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:49.334 01:32:57 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:49.334 01:32:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:49.334 01:32:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.334 01:32:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.334 ************************************ 00:12:49.334 START TEST raid_rebuild_test 00:12:49.334 ************************************ 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:49.334 
01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75069 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75069 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75069 ']' 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.334 01:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.334 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:49.334 Zero copy mechanism will not be used. 00:12:49.334 [2024-11-17 01:32:57.728237] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:49.334 [2024-11-17 01:32:57.728362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75069 ] 00:12:49.593 [2024-11-17 01:32:57.905185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.593 [2024-11-17 01:32:58.048550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.853 [2024-11-17 01:32:58.294301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.853 [2024-11-17 01:32:58.294376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.113 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.113 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:50.113 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:50.113 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:50.113 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.113 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.374 BaseBdev1_malloc 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.374 [2024-11-17 01:32:58.592817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:50.374 
[2024-11-17 01:32:58.592903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.374 [2024-11-17 01:32:58.592925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:50.374 [2024-11-17 01:32:58.592937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.374 [2024-11-17 01:32:58.595237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.374 [2024-11-17 01:32:58.595275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:50.374 BaseBdev1 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.374 BaseBdev2_malloc 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.374 [2024-11-17 01:32:58.654455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:50.374 [2024-11-17 01:32:58.654527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.374 [2024-11-17 01:32:58.654547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:50.374 [2024-11-17 01:32:58.654561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.374 [2024-11-17 01:32:58.656868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.374 [2024-11-17 01:32:58.656905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:50.374 BaseBdev2 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.374 spare_malloc 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.374 spare_delay 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.374 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.374 [2024-11-17 01:32:58.741086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:50.374 [2024-11-17 01:32:58.741157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:50.375 [2024-11-17 01:32:58.741178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:50.375 [2024-11-17 01:32:58.741190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.375 [2024-11-17 01:32:58.743547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.375 [2024-11-17 01:32:58.743585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:50.375 spare 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.375 [2024-11-17 01:32:58.753122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.375 [2024-11-17 01:32:58.755183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.375 [2024-11-17 01:32:58.755272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:50.375 [2024-11-17 01:32:58.755286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:50.375 [2024-11-17 01:32:58.755533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:50.375 [2024-11-17 01:32:58.755697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:50.375 [2024-11-17 01:32:58.755716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:50.375 [2024-11-17 01:32:58.755879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.375 "name": "raid_bdev1", 00:12:50.375 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:12:50.375 "strip_size_kb": 0, 00:12:50.375 "state": "online", 00:12:50.375 
"raid_level": "raid1", 00:12:50.375 "superblock": false, 00:12:50.375 "num_base_bdevs": 2, 00:12:50.375 "num_base_bdevs_discovered": 2, 00:12:50.375 "num_base_bdevs_operational": 2, 00:12:50.375 "base_bdevs_list": [ 00:12:50.375 { 00:12:50.375 "name": "BaseBdev1", 00:12:50.375 "uuid": "84d9af69-a0ba-5e7f-9b9a-c9d2e21a3e82", 00:12:50.375 "is_configured": true, 00:12:50.375 "data_offset": 0, 00:12:50.375 "data_size": 65536 00:12:50.375 }, 00:12:50.375 { 00:12:50.375 "name": "BaseBdev2", 00:12:50.375 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:12:50.375 "is_configured": true, 00:12:50.375 "data_offset": 0, 00:12:50.375 "data_size": 65536 00:12:50.375 } 00:12:50.375 ] 00:12:50.375 }' 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.375 01:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.944 [2024-11-17 01:32:59.220657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.944 01:32:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.944 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:51.204 [2024-11-17 01:32:59.468017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:51.204 /dev/nbd0 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.204 1+0 records in 00:12:51.204 1+0 records out 00:12:51.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421783 s, 9.7 MB/s 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:51.204 01:32:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:55.433 65536+0 records in 00:12:55.433 65536+0 records out 00:12:55.433 33554432 bytes (34 MB, 32 MiB) copied, 3.86469 s, 8.7 MB/s 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:55.433 [2024-11-17 01:33:03.616093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.433 [2024-11-17 01:33:03.632183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.433 "name": "raid_bdev1", 00:12:55.433 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:12:55.433 "strip_size_kb": 0, 00:12:55.433 "state": "online", 00:12:55.433 "raid_level": "raid1", 00:12:55.433 "superblock": false, 00:12:55.433 "num_base_bdevs": 2, 00:12:55.433 "num_base_bdevs_discovered": 1, 00:12:55.433 "num_base_bdevs_operational": 1, 00:12:55.433 "base_bdevs_list": [ 00:12:55.433 { 00:12:55.433 "name": null, 00:12:55.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.433 "is_configured": false, 00:12:55.433 "data_offset": 0, 00:12:55.433 "data_size": 65536 00:12:55.433 }, 00:12:55.433 { 00:12:55.433 "name": "BaseBdev2", 00:12:55.433 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:12:55.433 "is_configured": true, 00:12:55.433 "data_offset": 0, 00:12:55.433 "data_size": 65536 00:12:55.433 } 00:12:55.433 ] 00:12:55.433 }' 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.433 01:33:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.693 01:33:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:55.693 01:33:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.693 01:33:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.693 [2024-11-17 01:33:04.019507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.693 [2024-11-17 01:33:04.036940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:55.693 01:33:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.693 01:33:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:55.693 [2024-11-17 01:33:04.038778] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.632 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.892 "name": "raid_bdev1", 00:12:56.892 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:12:56.892 "strip_size_kb": 0, 00:12:56.892 "state": "online", 00:12:56.892 "raid_level": "raid1", 00:12:56.892 "superblock": false, 00:12:56.892 "num_base_bdevs": 2, 00:12:56.892 "num_base_bdevs_discovered": 2, 00:12:56.892 "num_base_bdevs_operational": 2, 00:12:56.892 "process": { 00:12:56.892 "type": "rebuild", 00:12:56.892 "target": "spare", 00:12:56.892 "progress": { 00:12:56.892 
"blocks": 20480, 00:12:56.892 "percent": 31 00:12:56.892 } 00:12:56.892 }, 00:12:56.892 "base_bdevs_list": [ 00:12:56.892 { 00:12:56.892 "name": "spare", 00:12:56.892 "uuid": "ab18728d-c575-5b3e-8f75-e341c4202668", 00:12:56.892 "is_configured": true, 00:12:56.892 "data_offset": 0, 00:12:56.892 "data_size": 65536 00:12:56.892 }, 00:12:56.892 { 00:12:56.892 "name": "BaseBdev2", 00:12:56.892 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:12:56.892 "is_configured": true, 00:12:56.892 "data_offset": 0, 00:12:56.892 "data_size": 65536 00:12:56.892 } 00:12:56.892 ] 00:12:56.892 }' 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.892 [2024-11-17 01:33:05.206563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.892 [2024-11-17 01:33:05.244277] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.892 [2024-11-17 01:33:05.244369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.892 [2024-11-17 01:33:05.244383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.892 [2024-11-17 01:33:05.244392] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.892 01:33:05 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.892 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.893 "name": "raid_bdev1", 00:12:56.893 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:12:56.893 "strip_size_kb": 0, 00:12:56.893 "state": "online", 00:12:56.893 "raid_level": "raid1", 00:12:56.893 
"superblock": false, 00:12:56.893 "num_base_bdevs": 2, 00:12:56.893 "num_base_bdevs_discovered": 1, 00:12:56.893 "num_base_bdevs_operational": 1, 00:12:56.893 "base_bdevs_list": [ 00:12:56.893 { 00:12:56.893 "name": null, 00:12:56.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.893 "is_configured": false, 00:12:56.893 "data_offset": 0, 00:12:56.893 "data_size": 65536 00:12:56.893 }, 00:12:56.893 { 00:12:56.893 "name": "BaseBdev2", 00:12:56.893 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:12:56.893 "is_configured": true, 00:12:56.893 "data_offset": 0, 00:12:56.893 "data_size": 65536 00:12:56.893 } 00:12:56.893 ] 00:12:56.893 }' 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.893 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.460 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:57.460 "name": "raid_bdev1", 00:12:57.460 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:12:57.460 "strip_size_kb": 0, 00:12:57.460 "state": "online", 00:12:57.460 "raid_level": "raid1", 00:12:57.460 "superblock": false, 00:12:57.460 "num_base_bdevs": 2, 00:12:57.460 "num_base_bdevs_discovered": 1, 00:12:57.460 "num_base_bdevs_operational": 1, 00:12:57.460 "base_bdevs_list": [ 00:12:57.461 { 00:12:57.461 "name": null, 00:12:57.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.461 "is_configured": false, 00:12:57.461 "data_offset": 0, 00:12:57.461 "data_size": 65536 00:12:57.461 }, 00:12:57.461 { 00:12:57.461 "name": "BaseBdev2", 00:12:57.461 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:12:57.461 "is_configured": true, 00:12:57.461 "data_offset": 0, 00:12:57.461 "data_size": 65536 00:12:57.461 } 00:12:57.461 ] 00:12:57.461 }' 00:12:57.461 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.461 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.461 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.461 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.461 01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:57.461 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.461 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.461 [2024-11-17 01:33:05.815661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:57.461 [2024-11-17 01:33:05.831124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:57.461 01:33:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.461 
01:33:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:57.461 [2024-11-17 01:33:05.832907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:58.397 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.397 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.397 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.397 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.398 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.398 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.398 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.398 01:33:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.398 01:33:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.657 "name": "raid_bdev1", 00:12:58.657 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:12:58.657 "strip_size_kb": 0, 00:12:58.657 "state": "online", 00:12:58.657 "raid_level": "raid1", 00:12:58.657 "superblock": false, 00:12:58.657 "num_base_bdevs": 2, 00:12:58.657 "num_base_bdevs_discovered": 2, 00:12:58.657 "num_base_bdevs_operational": 2, 00:12:58.657 "process": { 00:12:58.657 "type": "rebuild", 00:12:58.657 "target": "spare", 00:12:58.657 "progress": { 00:12:58.657 "blocks": 20480, 00:12:58.657 "percent": 31 00:12:58.657 } 00:12:58.657 }, 00:12:58.657 "base_bdevs_list": [ 
00:12:58.657 { 00:12:58.657 "name": "spare", 00:12:58.657 "uuid": "ab18728d-c575-5b3e-8f75-e341c4202668", 00:12:58.657 "is_configured": true, 00:12:58.657 "data_offset": 0, 00:12:58.657 "data_size": 65536 00:12:58.657 }, 00:12:58.657 { 00:12:58.657 "name": "BaseBdev2", 00:12:58.657 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:12:58.657 "is_configured": true, 00:12:58.657 "data_offset": 0, 00:12:58.657 "data_size": 65536 00:12:58.657 } 00:12:58.657 ] 00:12:58.657 }' 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=360 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.657 
01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.657 01:33:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.657 01:33:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.657 "name": "raid_bdev1", 00:12:58.657 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:12:58.657 "strip_size_kb": 0, 00:12:58.657 "state": "online", 00:12:58.657 "raid_level": "raid1", 00:12:58.657 "superblock": false, 00:12:58.657 "num_base_bdevs": 2, 00:12:58.657 "num_base_bdevs_discovered": 2, 00:12:58.657 "num_base_bdevs_operational": 2, 00:12:58.657 "process": { 00:12:58.657 "type": "rebuild", 00:12:58.657 "target": "spare", 00:12:58.657 "progress": { 00:12:58.657 "blocks": 22528, 00:12:58.657 "percent": 34 00:12:58.657 } 00:12:58.657 }, 00:12:58.657 "base_bdevs_list": [ 00:12:58.657 { 00:12:58.657 "name": "spare", 00:12:58.657 "uuid": "ab18728d-c575-5b3e-8f75-e341c4202668", 00:12:58.657 "is_configured": true, 00:12:58.657 "data_offset": 0, 00:12:58.657 "data_size": 65536 00:12:58.657 }, 00:12:58.657 { 00:12:58.657 "name": "BaseBdev2", 00:12:58.657 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:12:58.657 "is_configured": true, 00:12:58.657 "data_offset": 0, 00:12:58.657 "data_size": 65536 00:12:58.657 } 00:12:58.657 ] 00:12:58.657 }' 00:12:58.657 01:33:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.657 01:33:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:58.657 01:33:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.915 01:33:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.915 01:33:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.905 "name": "raid_bdev1", 00:12:59.905 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:12:59.905 "strip_size_kb": 0, 00:12:59.905 "state": "online", 00:12:59.905 "raid_level": "raid1", 00:12:59.905 "superblock": false, 00:12:59.905 "num_base_bdevs": 2, 00:12:59.905 "num_base_bdevs_discovered": 2, 00:12:59.905 "num_base_bdevs_operational": 2, 00:12:59.905 "process": { 
00:12:59.905 "type": "rebuild", 00:12:59.905 "target": "spare", 00:12:59.905 "progress": { 00:12:59.905 "blocks": 45056, 00:12:59.905 "percent": 68 00:12:59.905 } 00:12:59.905 }, 00:12:59.905 "base_bdevs_list": [ 00:12:59.905 { 00:12:59.905 "name": "spare", 00:12:59.905 "uuid": "ab18728d-c575-5b3e-8f75-e341c4202668", 00:12:59.905 "is_configured": true, 00:12:59.905 "data_offset": 0, 00:12:59.905 "data_size": 65536 00:12:59.905 }, 00:12:59.905 { 00:12:59.905 "name": "BaseBdev2", 00:12:59.905 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:12:59.905 "is_configured": true, 00:12:59.905 "data_offset": 0, 00:12:59.905 "data_size": 65536 00:12:59.905 } 00:12:59.905 ] 00:12:59.905 }' 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.905 01:33:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:00.843 [2024-11-17 01:33:09.051688] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:00.843 [2024-11-17 01:33:09.051806] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:00.843 [2024-11-17 01:33:09.051868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.843 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.104 "name": "raid_bdev1", 00:13:01.104 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:13:01.104 "strip_size_kb": 0, 00:13:01.104 "state": "online", 00:13:01.104 "raid_level": "raid1", 00:13:01.104 "superblock": false, 00:13:01.104 "num_base_bdevs": 2, 00:13:01.104 "num_base_bdevs_discovered": 2, 00:13:01.104 "num_base_bdevs_operational": 2, 00:13:01.104 "base_bdevs_list": [ 00:13:01.104 { 00:13:01.104 "name": "spare", 00:13:01.104 "uuid": "ab18728d-c575-5b3e-8f75-e341c4202668", 00:13:01.104 "is_configured": true, 00:13:01.104 "data_offset": 0, 00:13:01.104 "data_size": 65536 00:13:01.104 }, 00:13:01.104 { 00:13:01.104 "name": "BaseBdev2", 00:13:01.104 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:13:01.104 "is_configured": true, 00:13:01.104 "data_offset": 0, 00:13:01.104 "data_size": 65536 00:13:01.104 } 00:13:01.104 ] 00:13:01.104 }' 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:01.104 01:33:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.104 "name": "raid_bdev1", 00:13:01.104 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:13:01.104 "strip_size_kb": 0, 00:13:01.104 "state": "online", 00:13:01.104 "raid_level": "raid1", 00:13:01.104 "superblock": false, 00:13:01.104 "num_base_bdevs": 2, 00:13:01.104 "num_base_bdevs_discovered": 2, 00:13:01.104 "num_base_bdevs_operational": 2, 00:13:01.104 "base_bdevs_list": [ 00:13:01.104 { 00:13:01.104 "name": "spare", 00:13:01.104 "uuid": "ab18728d-c575-5b3e-8f75-e341c4202668", 00:13:01.104 "is_configured": true, 
00:13:01.104 "data_offset": 0, 00:13:01.104 "data_size": 65536 00:13:01.104 }, 00:13:01.104 { 00:13:01.104 "name": "BaseBdev2", 00:13:01.104 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:13:01.104 "is_configured": true, 00:13:01.104 "data_offset": 0, 00:13:01.104 "data_size": 65536 00:13:01.104 } 00:13:01.104 ] 00:13:01.104 }' 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.104 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.365 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.365 "name": "raid_bdev1", 00:13:01.365 "uuid": "a513a930-6a81-4c46-a7f3-5089edae671e", 00:13:01.365 "strip_size_kb": 0, 00:13:01.365 "state": "online", 00:13:01.365 "raid_level": "raid1", 00:13:01.365 "superblock": false, 00:13:01.365 "num_base_bdevs": 2, 00:13:01.365 "num_base_bdevs_discovered": 2, 00:13:01.365 "num_base_bdevs_operational": 2, 00:13:01.365 "base_bdevs_list": [ 00:13:01.365 { 00:13:01.365 "name": "spare", 00:13:01.365 "uuid": "ab18728d-c575-5b3e-8f75-e341c4202668", 00:13:01.365 "is_configured": true, 00:13:01.365 "data_offset": 0, 00:13:01.365 "data_size": 65536 00:13:01.365 }, 00:13:01.365 { 00:13:01.365 "name": "BaseBdev2", 00:13:01.365 "uuid": "f877d7e9-c077-588a-80e1-ed0cf305e44f", 00:13:01.365 "is_configured": true, 00:13:01.365 "data_offset": 0, 00:13:01.365 "data_size": 65536 00:13:01.365 } 00:13:01.365 ] 00:13:01.365 }' 00:13:01.365 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.365 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.625 [2024-11-17 01:33:09.937794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.625 [2024-11-17 01:33:09.937837] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.625 [2024-11-17 01:33:09.937969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.625 [2024-11-17 01:33:09.938050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.625 [2024-11-17 01:33:09.938062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:01.625 01:33:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:01.885 /dev/nbd0 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.885 1+0 records in 00:13:01.885 1+0 records out 00:13:01.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301183 s, 13.6 MB/s 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:01.885 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:01.886 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.886 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:01.886 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:02.146 /dev/nbd1 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.146 1+0 records in 00:13:02.146 1+0 records out 00:13:02.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445303 s, 9.2 MB/s 00:13:02.146 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.147 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:02.147 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.147 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.147 01:33:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:02.147 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.147 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:02.147 01:33:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:02.407 01:33:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:02.407 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.407 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:02.407 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.407 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:02.407 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.407 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.666 01:33:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:02.926 01:33:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:02.926 01:33:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:02.926 01:33:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75069 00:13:02.927 01:33:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75069 ']' 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75069 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75069 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.927 killing process with pid 75069 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75069' 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75069 00:13:02.927 Received shutdown signal, test time was about 60.000000 seconds 00:13:02.927 00:13:02.927 Latency(us) 00:13:02.927 [2024-11-17T01:33:11.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.927 [2024-11-17T01:33:11.387Z] =================================================================================================================== 00:13:02.927 [2024-11-17T01:33:11.387Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:02.927 [2024-11-17 01:33:11.226820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.927 01:33:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75069 00:13:03.186 [2024-11-17 01:33:11.514911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:04.126 01:33:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:04.126 00:13:04.126 real 0m14.945s 00:13:04.126 user 0m16.989s 00:13:04.126 sys 0m2.949s 00:13:04.126 01:33:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.126 01:33:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.126 ************************************ 00:13:04.126 END TEST raid_rebuild_test 00:13:04.126 ************************************ 00:13:04.387 01:33:12 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:04.387 01:33:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:04.387 01:33:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.387 01:33:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:04.387 ************************************ 00:13:04.387 START TEST raid_rebuild_test_sb 00:13:04.387 ************************************ 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75486 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75486 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75486 ']' 00:13:04.387 01:33:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.387 01:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.387 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:04.387 Zero copy mechanism will not be used. 00:13:04.387 [2024-11-17 01:33:12.740636] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:04.387 [2024-11-17 01:33:12.740767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75486 ] 00:13:04.647 [2024-11-17 01:33:12.912378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.647 [2024-11-17 01:33:13.029115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.907 [2024-11-17 01:33:13.225923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.907 [2024-11-17 01:33:13.225996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.167 BaseBdev1_malloc 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.167 [2024-11-17 01:33:13.611034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:05.167 [2024-11-17 01:33:13.611137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.167 [2024-11-17 01:33:13.611177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:05.167 [2024-11-17 01:33:13.611194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.167 [2024-11-17 01:33:13.613247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.167 [2024-11-17 01:33:13.613290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:05.167 BaseBdev1 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:05.167 01:33:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.167 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.428 BaseBdev2_malloc 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.428 [2024-11-17 01:33:13.663910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:05.428 [2024-11-17 01:33:13.663992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.428 [2024-11-17 01:33:13.664012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:05.428 [2024-11-17 01:33:13.664027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.428 [2024-11-17 01:33:13.666022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.428 [2024-11-17 01:33:13.666060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:05.428 BaseBdev2 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.428 spare_malloc 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.428 spare_delay 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.428 [2024-11-17 01:33:13.741755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:05.428 [2024-11-17 01:33:13.741850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.428 [2024-11-17 01:33:13.741871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:05.428 [2024-11-17 01:33:13.741884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.428 [2024-11-17 01:33:13.743925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.428 [2024-11-17 01:33:13.743969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:05.428 spare 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.428 01:33:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.428 [2024-11-17 01:33:13.753810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.428 [2024-11-17 01:33:13.755554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.428 [2024-11-17 01:33:13.755739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:05.428 [2024-11-17 01:33:13.755771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:05.428 [2024-11-17 01:33:13.756012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:05.428 [2024-11-17 01:33:13.756186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:05.428 [2024-11-17 01:33:13.756205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:05.428 [2024-11-17 01:33:13.756352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.428 "name": "raid_bdev1", 00:13:05.428 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:05.428 "strip_size_kb": 0, 00:13:05.428 "state": "online", 00:13:05.428 "raid_level": "raid1", 00:13:05.428 "superblock": true, 00:13:05.428 "num_base_bdevs": 2, 00:13:05.428 "num_base_bdevs_discovered": 2, 00:13:05.428 "num_base_bdevs_operational": 2, 00:13:05.428 "base_bdevs_list": [ 00:13:05.428 { 00:13:05.428 "name": "BaseBdev1", 00:13:05.428 "uuid": "ef61e656-e46f-5b63-b915-0517fd77803a", 00:13:05.428 "is_configured": true, 00:13:05.428 "data_offset": 2048, 00:13:05.428 "data_size": 63488 00:13:05.428 }, 00:13:05.428 { 00:13:05.428 "name": "BaseBdev2", 00:13:05.428 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:05.428 "is_configured": true, 00:13:05.428 "data_offset": 2048, 00:13:05.428 "data_size": 63488 00:13:05.428 } 00:13:05.428 ] 00:13:05.428 }' 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.428 01:33:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:05.999 [2024-11-17 01:33:14.157457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:05.999 [2024-11-17 01:33:14.412734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:05.999 /dev/nbd0 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.999 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.259 1+0 records in 00:13:06.259 1+0 records out 00:13:06.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434246 s, 9.4 MB/s 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.259 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:06.260 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:06.260 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:06.260 01:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:10.468 63488+0 records in 00:13:10.468 63488+0 records out 00:13:10.468 32505856 bytes (33 MB, 31 MiB) copied, 4.06325 s, 8.0 MB/s 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.468 01:33:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:10.468 [2024-11-17 01:33:18.751402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:10.468 01:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.469 [2024-11-17 01:33:18.783455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.469 "name": "raid_bdev1", 00:13:10.469 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:10.469 "strip_size_kb": 0, 00:13:10.469 "state": "online", 00:13:10.469 "raid_level": "raid1", 00:13:10.469 "superblock": true, 
00:13:10.469 "num_base_bdevs": 2, 00:13:10.469 "num_base_bdevs_discovered": 1, 00:13:10.469 "num_base_bdevs_operational": 1, 00:13:10.469 "base_bdevs_list": [ 00:13:10.469 { 00:13:10.469 "name": null, 00:13:10.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.469 "is_configured": false, 00:13:10.469 "data_offset": 0, 00:13:10.469 "data_size": 63488 00:13:10.469 }, 00:13:10.469 { 00:13:10.469 "name": "BaseBdev2", 00:13:10.469 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:10.469 "is_configured": true, 00:13:10.469 "data_offset": 2048, 00:13:10.469 "data_size": 63488 00:13:10.469 } 00:13:10.469 ] 00:13:10.469 }' 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.469 01:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.037 01:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.038 01:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.038 01:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.038 [2024-11-17 01:33:19.238881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.038 [2024-11-17 01:33:19.256958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:11.038 01:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.038 01:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:11.038 [2024-11-17 01:33:19.258996] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.976 "name": "raid_bdev1", 00:13:11.976 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:11.976 "strip_size_kb": 0, 00:13:11.976 "state": "online", 00:13:11.976 "raid_level": "raid1", 00:13:11.976 "superblock": true, 00:13:11.976 "num_base_bdevs": 2, 00:13:11.976 "num_base_bdevs_discovered": 2, 00:13:11.976 "num_base_bdevs_operational": 2, 00:13:11.976 "process": { 00:13:11.976 "type": "rebuild", 00:13:11.976 "target": "spare", 00:13:11.976 "progress": { 00:13:11.976 "blocks": 20480, 00:13:11.976 "percent": 32 00:13:11.976 } 00:13:11.976 }, 00:13:11.976 "base_bdevs_list": [ 00:13:11.976 { 00:13:11.976 "name": "spare", 00:13:11.976 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:11.976 "is_configured": true, 00:13:11.976 "data_offset": 2048, 00:13:11.976 "data_size": 63488 00:13:11.976 }, 00:13:11.976 { 00:13:11.976 "name": "BaseBdev2", 00:13:11.976 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:11.976 "is_configured": true, 00:13:11.976 "data_offset": 2048, 00:13:11.976 "data_size": 63488 
00:13:11.976 } 00:13:11.976 ] 00:13:11.976 }' 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.976 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.976 [2024-11-17 01:33:20.418811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.236 [2024-11-17 01:33:20.468427] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:12.236 [2024-11-17 01:33:20.468495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.236 [2024-11-17 01:33:20.468510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.236 [2024-11-17 01:33:20.468521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.236 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.236 "name": "raid_bdev1", 00:13:12.236 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:12.236 "strip_size_kb": 0, 00:13:12.236 "state": "online", 00:13:12.236 "raid_level": "raid1", 00:13:12.236 "superblock": true, 00:13:12.236 "num_base_bdevs": 2, 00:13:12.236 "num_base_bdevs_discovered": 1, 00:13:12.236 "num_base_bdevs_operational": 1, 00:13:12.236 "base_bdevs_list": [ 00:13:12.236 { 00:13:12.236 "name": null, 00:13:12.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.237 "is_configured": false, 00:13:12.237 "data_offset": 0, 00:13:12.237 "data_size": 63488 00:13:12.237 }, 00:13:12.237 { 00:13:12.237 "name": "BaseBdev2", 00:13:12.237 "uuid": 
"bb842508-9241-5934-8236-8f76740a3529", 00:13:12.237 "is_configured": true, 00:13:12.237 "data_offset": 2048, 00:13:12.237 "data_size": 63488 00:13:12.237 } 00:13:12.237 ] 00:13:12.237 }' 00:13:12.237 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.237 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.806 01:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.806 "name": "raid_bdev1", 00:13:12.806 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:12.806 "strip_size_kb": 0, 00:13:12.806 "state": "online", 00:13:12.806 "raid_level": "raid1", 00:13:12.806 "superblock": true, 00:13:12.806 "num_base_bdevs": 2, 00:13:12.806 "num_base_bdevs_discovered": 1, 00:13:12.806 "num_base_bdevs_operational": 1, 00:13:12.806 "base_bdevs_list": [ 00:13:12.806 { 
00:13:12.806 "name": null, 00:13:12.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.806 "is_configured": false, 00:13:12.806 "data_offset": 0, 00:13:12.806 "data_size": 63488 00:13:12.806 }, 00:13:12.806 { 00:13:12.806 "name": "BaseBdev2", 00:13:12.806 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:12.806 "is_configured": true, 00:13:12.806 "data_offset": 2048, 00:13:12.806 "data_size": 63488 00:13:12.806 } 00:13:12.806 ] 00:13:12.806 }' 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.806 [2024-11-17 01:33:21.124199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.806 [2024-11-17 01:33:21.141705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.806 01:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:12.806 [2024-11-17 01:33:21.143786] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.764 01:33:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.764 "name": "raid_bdev1", 00:13:13.764 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:13.764 "strip_size_kb": 0, 00:13:13.764 "state": "online", 00:13:13.764 "raid_level": "raid1", 00:13:13.764 "superblock": true, 00:13:13.764 "num_base_bdevs": 2, 00:13:13.764 "num_base_bdevs_discovered": 2, 00:13:13.764 "num_base_bdevs_operational": 2, 00:13:13.764 "process": { 00:13:13.764 "type": "rebuild", 00:13:13.764 "target": "spare", 00:13:13.764 "progress": { 00:13:13.764 "blocks": 20480, 00:13:13.764 "percent": 32 00:13:13.764 } 00:13:13.764 }, 00:13:13.764 "base_bdevs_list": [ 00:13:13.764 { 00:13:13.764 "name": "spare", 00:13:13.764 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:13.764 "is_configured": true, 00:13:13.764 "data_offset": 2048, 00:13:13.764 "data_size": 63488 00:13:13.764 }, 00:13:13.764 { 00:13:13.764 "name": "BaseBdev2", 00:13:13.764 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:13.764 
"is_configured": true, 00:13:13.764 "data_offset": 2048, 00:13:13.764 "data_size": 63488 00:13:13.764 } 00:13:13.764 ] 00:13:13.764 }' 00:13:13.764 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:14.024 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.024 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.025 "name": "raid_bdev1", 00:13:14.025 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:14.025 "strip_size_kb": 0, 00:13:14.025 "state": "online", 00:13:14.025 "raid_level": "raid1", 00:13:14.025 "superblock": true, 00:13:14.025 "num_base_bdevs": 2, 00:13:14.025 "num_base_bdevs_discovered": 2, 00:13:14.025 "num_base_bdevs_operational": 2, 00:13:14.025 "process": { 00:13:14.025 "type": "rebuild", 00:13:14.025 "target": "spare", 00:13:14.025 "progress": { 00:13:14.025 "blocks": 22528, 00:13:14.025 "percent": 35 00:13:14.025 } 00:13:14.025 }, 00:13:14.025 "base_bdevs_list": [ 00:13:14.025 { 00:13:14.025 "name": "spare", 00:13:14.025 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:14.025 "is_configured": true, 00:13:14.025 "data_offset": 2048, 00:13:14.025 "data_size": 63488 00:13:14.025 }, 00:13:14.025 { 00:13:14.025 "name": "BaseBdev2", 00:13:14.025 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:14.025 "is_configured": true, 00:13:14.025 "data_offset": 2048, 00:13:14.025 "data_size": 63488 00:13:14.025 } 00:13:14.025 ] 00:13:14.025 }' 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.025 01:33:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.025 01:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.406 "name": "raid_bdev1", 00:13:15.406 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:15.406 "strip_size_kb": 0, 00:13:15.406 "state": "online", 00:13:15.406 "raid_level": "raid1", 00:13:15.406 "superblock": true, 00:13:15.406 "num_base_bdevs": 2, 00:13:15.406 "num_base_bdevs_discovered": 2, 00:13:15.406 "num_base_bdevs_operational": 2, 00:13:15.406 "process": { 
00:13:15.406 "type": "rebuild", 00:13:15.406 "target": "spare", 00:13:15.406 "progress": { 00:13:15.406 "blocks": 45056, 00:13:15.406 "percent": 70 00:13:15.406 } 00:13:15.406 }, 00:13:15.406 "base_bdevs_list": [ 00:13:15.406 { 00:13:15.406 "name": "spare", 00:13:15.406 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:15.406 "is_configured": true, 00:13:15.406 "data_offset": 2048, 00:13:15.406 "data_size": 63488 00:13:15.406 }, 00:13:15.406 { 00:13:15.406 "name": "BaseBdev2", 00:13:15.406 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:15.406 "is_configured": true, 00:13:15.406 "data_offset": 2048, 00:13:15.406 "data_size": 63488 00:13:15.406 } 00:13:15.406 ] 00:13:15.406 }' 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.406 01:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.977 [2024-11-17 01:33:24.266614] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:15.977 [2024-11-17 01:33:24.266709] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:15.977 [2024-11-17 01:33:24.266828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.237 
01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.237 "name": "raid_bdev1", 00:13:16.237 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:16.237 "strip_size_kb": 0, 00:13:16.237 "state": "online", 00:13:16.237 "raid_level": "raid1", 00:13:16.237 "superblock": true, 00:13:16.237 "num_base_bdevs": 2, 00:13:16.237 "num_base_bdevs_discovered": 2, 00:13:16.237 "num_base_bdevs_operational": 2, 00:13:16.237 "base_bdevs_list": [ 00:13:16.237 { 00:13:16.237 "name": "spare", 00:13:16.237 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:16.237 "is_configured": true, 00:13:16.237 "data_offset": 2048, 00:13:16.237 "data_size": 63488 00:13:16.237 }, 00:13:16.237 { 00:13:16.237 "name": "BaseBdev2", 00:13:16.237 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:16.237 "is_configured": true, 00:13:16.237 "data_offset": 2048, 00:13:16.237 "data_size": 63488 00:13:16.237 } 00:13:16.237 ] 00:13:16.237 }' 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.237 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.497 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.497 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.497 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.497 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.497 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.497 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.498 "name": "raid_bdev1", 00:13:16.498 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:16.498 "strip_size_kb": 0, 00:13:16.498 "state": "online", 00:13:16.498 "raid_level": "raid1", 00:13:16.498 "superblock": true, 00:13:16.498 "num_base_bdevs": 2, 00:13:16.498 "num_base_bdevs_discovered": 2, 00:13:16.498 "num_base_bdevs_operational": 2, 00:13:16.498 "base_bdevs_list": [ 00:13:16.498 { 00:13:16.498 
"name": "spare", 00:13:16.498 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:16.498 "is_configured": true, 00:13:16.498 "data_offset": 2048, 00:13:16.498 "data_size": 63488 00:13:16.498 }, 00:13:16.498 { 00:13:16.498 "name": "BaseBdev2", 00:13:16.498 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:16.498 "is_configured": true, 00:13:16.498 "data_offset": 2048, 00:13:16.498 "data_size": 63488 00:13:16.498 } 00:13:16.498 ] 00:13:16.498 }' 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.498 "name": "raid_bdev1", 00:13:16.498 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:16.498 "strip_size_kb": 0, 00:13:16.498 "state": "online", 00:13:16.498 "raid_level": "raid1", 00:13:16.498 "superblock": true, 00:13:16.498 "num_base_bdevs": 2, 00:13:16.498 "num_base_bdevs_discovered": 2, 00:13:16.498 "num_base_bdevs_operational": 2, 00:13:16.498 "base_bdevs_list": [ 00:13:16.498 { 00:13:16.498 "name": "spare", 00:13:16.498 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:16.498 "is_configured": true, 00:13:16.498 "data_offset": 2048, 00:13:16.498 "data_size": 63488 00:13:16.498 }, 00:13:16.498 { 00:13:16.498 "name": "BaseBdev2", 00:13:16.498 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:16.498 "is_configured": true, 00:13:16.498 "data_offset": 2048, 00:13:16.498 "data_size": 63488 00:13:16.498 } 00:13:16.498 ] 00:13:16.498 }' 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.498 01:33:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.758 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:16.758 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.758 01:33:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.758 [2024-11-17 01:33:25.199274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.758 [2024-11-17 01:33:25.199326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.758 [2024-11-17 01:33:25.199423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.758 [2024-11-17 01:33:25.199499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.758 [2024-11-17 01:33:25.199514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:16.758 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.758 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:16.758 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.758 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.758 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:17.019 /dev/nbd0 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.019 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.278 1+0 records in 00:13:17.278 1+0 records out 00:13:17.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400887 s, 10.2 MB/s 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:17.278 /dev/nbd1 00:13:17.278 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:17.279 01:33:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.279 1+0 records in 00:13:17.279 1+0 records out 00:13:17.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232549 s, 17.6 MB/s 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:17.279 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.539 
01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.539 01:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.799 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.059 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.059 [2024-11-17 01:33:26.337393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:18.059 [2024-11-17 01:33:26.337464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.060 [2024-11-17 01:33:26.337488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:18.060 [2024-11-17 01:33:26.337499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.060 [2024-11-17 01:33:26.340664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.060 [2024-11-17 01:33:26.340795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:18.060 [2024-11-17 01:33:26.341086] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:18.060 [2024-11-17 
01:33:26.341246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.060 [2024-11-17 01:33:26.341703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.060 spare 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.060 [2024-11-17 01:33:26.441900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:18.060 [2024-11-17 01:33:26.441969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:18.060 [2024-11-17 01:33:26.442346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:18.060 [2024-11-17 01:33:26.442587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:18.060 [2024-11-17 01:33:26.442607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:18.060 [2024-11-17 01:33:26.442820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.060 "name": "raid_bdev1", 00:13:18.060 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:18.060 "strip_size_kb": 0, 00:13:18.060 "state": "online", 00:13:18.060 "raid_level": "raid1", 00:13:18.060 "superblock": true, 00:13:18.060 "num_base_bdevs": 2, 00:13:18.060 "num_base_bdevs_discovered": 2, 00:13:18.060 "num_base_bdevs_operational": 2, 00:13:18.060 "base_bdevs_list": [ 00:13:18.060 { 00:13:18.060 "name": "spare", 00:13:18.060 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:18.060 "is_configured": true, 00:13:18.060 "data_offset": 2048, 00:13:18.060 "data_size": 63488 00:13:18.060 }, 00:13:18.060 { 00:13:18.060 "name": "BaseBdev2", 00:13:18.060 "uuid": 
"bb842508-9241-5934-8236-8f76740a3529", 00:13:18.060 "is_configured": true, 00:13:18.060 "data_offset": 2048, 00:13:18.060 "data_size": 63488 00:13:18.060 } 00:13:18.060 ] 00:13:18.060 }' 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.060 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.631 "name": "raid_bdev1", 00:13:18.631 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:18.631 "strip_size_kb": 0, 00:13:18.631 "state": "online", 00:13:18.631 "raid_level": "raid1", 00:13:18.631 "superblock": true, 00:13:18.631 "num_base_bdevs": 2, 00:13:18.631 "num_base_bdevs_discovered": 2, 00:13:18.631 "num_base_bdevs_operational": 2, 00:13:18.631 "base_bdevs_list": [ 00:13:18.631 { 
00:13:18.631 "name": "spare", 00:13:18.631 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:18.631 "is_configured": true, 00:13:18.631 "data_offset": 2048, 00:13:18.631 "data_size": 63488 00:13:18.631 }, 00:13:18.631 { 00:13:18.631 "name": "BaseBdev2", 00:13:18.631 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:18.631 "is_configured": true, 00:13:18.631 "data_offset": 2048, 00:13:18.631 "data_size": 63488 00:13:18.631 } 00:13:18.631 ] 00:13:18.631 }' 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.631 01:33:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.631 [2024-11-17 01:33:27.032572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.631 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.631 "name": "raid_bdev1", 00:13:18.631 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:18.631 "strip_size_kb": 0, 00:13:18.631 
"state": "online", 00:13:18.631 "raid_level": "raid1", 00:13:18.631 "superblock": true, 00:13:18.631 "num_base_bdevs": 2, 00:13:18.631 "num_base_bdevs_discovered": 1, 00:13:18.631 "num_base_bdevs_operational": 1, 00:13:18.631 "base_bdevs_list": [ 00:13:18.631 { 00:13:18.631 "name": null, 00:13:18.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.631 "is_configured": false, 00:13:18.631 "data_offset": 0, 00:13:18.632 "data_size": 63488 00:13:18.632 }, 00:13:18.632 { 00:13:18.632 "name": "BaseBdev2", 00:13:18.632 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:18.632 "is_configured": true, 00:13:18.632 "data_offset": 2048, 00:13:18.632 "data_size": 63488 00:13:18.632 } 00:13:18.632 ] 00:13:18.632 }' 00:13:18.632 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.632 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.202 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:19.202 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.202 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.202 [2024-11-17 01:33:27.419993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.202 [2024-11-17 01:33:27.420202] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:19.202 [2024-11-17 01:33:27.420227] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:19.203 [2024-11-17 01:33:27.420266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.203 [2024-11-17 01:33:27.435940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:19.203 01:33:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.203 01:33:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:19.203 [2024-11-17 01:33:27.438405] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.142 "name": "raid_bdev1", 00:13:20.142 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:20.142 "strip_size_kb": 0, 00:13:20.142 "state": "online", 00:13:20.142 "raid_level": "raid1", 
00:13:20.142 "superblock": true, 00:13:20.142 "num_base_bdevs": 2, 00:13:20.142 "num_base_bdevs_discovered": 2, 00:13:20.142 "num_base_bdevs_operational": 2, 00:13:20.142 "process": { 00:13:20.142 "type": "rebuild", 00:13:20.142 "target": "spare", 00:13:20.142 "progress": { 00:13:20.142 "blocks": 20480, 00:13:20.142 "percent": 32 00:13:20.142 } 00:13:20.142 }, 00:13:20.142 "base_bdevs_list": [ 00:13:20.142 { 00:13:20.142 "name": "spare", 00:13:20.142 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:20.142 "is_configured": true, 00:13:20.142 "data_offset": 2048, 00:13:20.142 "data_size": 63488 00:13:20.142 }, 00:13:20.142 { 00:13:20.142 "name": "BaseBdev2", 00:13:20.142 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:20.142 "is_configured": true, 00:13:20.142 "data_offset": 2048, 00:13:20.142 "data_size": 63488 00:13:20.142 } 00:13:20.142 ] 00:13:20.142 }' 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.142 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.142 [2024-11-17 01:33:28.593627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.402 [2024-11-17 01:33:28.646278] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.402 [2024-11-17 01:33:28.646341] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:20.402 [2024-11-17 01:33:28.646356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.402 [2024-11-17 01:33:28.646366] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.402 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.403 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.403 01:33:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.403 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.403 "name": "raid_bdev1", 00:13:20.403 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:20.403 "strip_size_kb": 0, 00:13:20.403 "state": "online", 00:13:20.403 "raid_level": "raid1", 00:13:20.403 "superblock": true, 00:13:20.403 "num_base_bdevs": 2, 00:13:20.403 "num_base_bdevs_discovered": 1, 00:13:20.403 "num_base_bdevs_operational": 1, 00:13:20.403 "base_bdevs_list": [ 00:13:20.403 { 00:13:20.403 "name": null, 00:13:20.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.403 "is_configured": false, 00:13:20.403 "data_offset": 0, 00:13:20.403 "data_size": 63488 00:13:20.403 }, 00:13:20.403 { 00:13:20.403 "name": "BaseBdev2", 00:13:20.403 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:20.403 "is_configured": true, 00:13:20.403 "data_offset": 2048, 00:13:20.403 "data_size": 63488 00:13:20.403 } 00:13:20.403 ] 00:13:20.403 }' 00:13:20.403 01:33:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.403 01:33:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.662 01:33:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.662 01:33:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.662 01:33:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.662 [2024-11-17 01:33:29.081342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.662 [2024-11-17 01:33:29.081410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.662 [2024-11-17 01:33:29.081431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:20.662 [2024-11-17 01:33:29.081445] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.662 [2024-11-17 01:33:29.081970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.662 [2024-11-17 01:33:29.082000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.662 [2024-11-17 01:33:29.082083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:20.662 [2024-11-17 01:33:29.082104] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:20.662 [2024-11-17 01:33:29.082113] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:20.662 [2024-11-17 01:33:29.082141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.662 [2024-11-17 01:33:29.096753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:20.662 spare 00:13:20.662 01:33:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.662 01:33:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:20.662 [2024-11-17 01:33:29.098799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.044 "name": "raid_bdev1", 00:13:22.044 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:22.044 "strip_size_kb": 0, 00:13:22.044 "state": "online", 00:13:22.044 "raid_level": "raid1", 00:13:22.044 "superblock": true, 00:13:22.044 "num_base_bdevs": 2, 00:13:22.044 "num_base_bdevs_discovered": 2, 00:13:22.044 "num_base_bdevs_operational": 2, 00:13:22.044 "process": { 00:13:22.044 "type": "rebuild", 00:13:22.044 "target": "spare", 00:13:22.044 "progress": { 00:13:22.044 "blocks": 20480, 00:13:22.044 "percent": 32 00:13:22.044 } 00:13:22.044 }, 00:13:22.044 "base_bdevs_list": [ 00:13:22.044 { 00:13:22.044 "name": "spare", 00:13:22.044 "uuid": "e8b6dcf1-cae2-5b11-810f-16fb3080b363", 00:13:22.044 "is_configured": true, 00:13:22.044 "data_offset": 2048, 00:13:22.044 "data_size": 63488 00:13:22.044 }, 00:13:22.044 { 00:13:22.044 "name": "BaseBdev2", 00:13:22.044 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:22.044 "is_configured": true, 00:13:22.044 "data_offset": 2048, 00:13:22.044 "data_size": 63488 00:13:22.044 } 00:13:22.044 ] 00:13:22.044 }' 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.044 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.044 
01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.045 [2024-11-17 01:33:30.239299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.045 [2024-11-17 01:33:30.306847] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.045 [2024-11-17 01:33:30.306904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.045 [2024-11-17 01:33:30.306921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.045 [2024-11-17 01:33:30.306929] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.045 "name": "raid_bdev1", 00:13:22.045 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:22.045 "strip_size_kb": 0, 00:13:22.045 "state": "online", 00:13:22.045 "raid_level": "raid1", 00:13:22.045 "superblock": true, 00:13:22.045 "num_base_bdevs": 2, 00:13:22.045 "num_base_bdevs_discovered": 1, 00:13:22.045 "num_base_bdevs_operational": 1, 00:13:22.045 "base_bdevs_list": [ 00:13:22.045 { 00:13:22.045 "name": null, 00:13:22.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.045 "is_configured": false, 00:13:22.045 "data_offset": 0, 00:13:22.045 "data_size": 63488 00:13:22.045 }, 00:13:22.045 { 00:13:22.045 "name": "BaseBdev2", 00:13:22.045 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:22.045 "is_configured": true, 00:13:22.045 "data_offset": 2048, 00:13:22.045 "data_size": 63488 00:13:22.045 } 00:13:22.045 ] 00:13:22.045 }' 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.045 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.305 01:33:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.305 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.305 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.305 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.305 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.305 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.305 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.305 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.305 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.565 "name": "raid_bdev1", 00:13:22.565 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:22.565 "strip_size_kb": 0, 00:13:22.565 "state": "online", 00:13:22.565 "raid_level": "raid1", 00:13:22.565 "superblock": true, 00:13:22.565 "num_base_bdevs": 2, 00:13:22.565 "num_base_bdevs_discovered": 1, 00:13:22.565 "num_base_bdevs_operational": 1, 00:13:22.565 "base_bdevs_list": [ 00:13:22.565 { 00:13:22.565 "name": null, 00:13:22.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.565 "is_configured": false, 00:13:22.565 "data_offset": 0, 00:13:22.565 "data_size": 63488 00:13:22.565 }, 00:13:22.565 { 00:13:22.565 "name": "BaseBdev2", 00:13:22.565 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:22.565 "is_configured": true, 00:13:22.565 "data_offset": 2048, 00:13:22.565 "data_size": 
63488 00:13:22.565 } 00:13:22.565 ] 00:13:22.565 }' 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.565 [2024-11-17 01:33:30.897495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:22.565 [2024-11-17 01:33:30.897548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.565 [2024-11-17 01:33:30.897571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:22.565 [2024-11-17 01:33:30.897591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.565 [2024-11-17 01:33:30.898084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.565 [2024-11-17 01:33:30.898110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:22.565 [2024-11-17 01:33:30.898196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:22.565 [2024-11-17 01:33:30.898211] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.565 [2024-11-17 01:33:30.898221] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:22.565 [2024-11-17 01:33:30.898232] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:22.565 BaseBdev1 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.565 01:33:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.506 01:33:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.765 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.765 "name": "raid_bdev1", 00:13:23.765 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:23.765 "strip_size_kb": 0, 00:13:23.765 "state": "online", 00:13:23.765 "raid_level": "raid1", 00:13:23.765 "superblock": true, 00:13:23.765 "num_base_bdevs": 2, 00:13:23.765 "num_base_bdevs_discovered": 1, 00:13:23.765 "num_base_bdevs_operational": 1, 00:13:23.765 "base_bdevs_list": [ 00:13:23.765 { 00:13:23.765 "name": null, 00:13:23.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.765 "is_configured": false, 00:13:23.765 "data_offset": 0, 00:13:23.765 "data_size": 63488 00:13:23.765 }, 00:13:23.765 { 00:13:23.765 "name": "BaseBdev2", 00:13:23.765 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:23.765 "is_configured": true, 00:13:23.765 "data_offset": 2048, 00:13:23.765 "data_size": 63488 00:13:23.765 } 00:13:23.765 ] 00:13:23.765 }' 00:13:23.765 01:33:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.765 01:33:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.025 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.026 "name": "raid_bdev1", 00:13:24.026 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:24.026 "strip_size_kb": 0, 00:13:24.026 "state": "online", 00:13:24.026 "raid_level": "raid1", 00:13:24.026 "superblock": true, 00:13:24.026 "num_base_bdevs": 2, 00:13:24.026 "num_base_bdevs_discovered": 1, 00:13:24.026 "num_base_bdevs_operational": 1, 00:13:24.026 "base_bdevs_list": [ 00:13:24.026 { 00:13:24.026 "name": null, 00:13:24.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.026 "is_configured": false, 00:13:24.026 "data_offset": 0, 00:13:24.026 "data_size": 63488 00:13:24.026 }, 00:13:24.026 { 00:13:24.026 "name": "BaseBdev2", 00:13:24.026 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:24.026 "is_configured": true, 00:13:24.026 "data_offset": 2048, 00:13:24.026 "data_size": 63488 00:13:24.026 } 00:13:24.026 ] 00:13:24.026 }' 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.026 01:33:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.026 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.026 [2024-11-17 01:33:32.478891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.026 [2024-11-17 01:33:32.479006] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:24.026 [2024-11-17 01:33:32.479022] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:24.026 request: 00:13:24.026 { 00:13:24.026 "base_bdev": "BaseBdev1", 00:13:24.286 "raid_bdev": "raid_bdev1", 00:13:24.286 "method": 
"bdev_raid_add_base_bdev", 00:13:24.286 "req_id": 1 00:13:24.286 } 00:13:24.286 Got JSON-RPC error response 00:13:24.286 response: 00:13:24.286 { 00:13:24.286 "code": -22, 00:13:24.286 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:24.286 } 00:13:24.286 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:24.286 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:24.286 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.286 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.286 01:33:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.286 01:33:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.225 01:33:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.225 "name": "raid_bdev1", 00:13:25.225 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:25.225 "strip_size_kb": 0, 00:13:25.225 "state": "online", 00:13:25.225 "raid_level": "raid1", 00:13:25.225 "superblock": true, 00:13:25.225 "num_base_bdevs": 2, 00:13:25.225 "num_base_bdevs_discovered": 1, 00:13:25.225 "num_base_bdevs_operational": 1, 00:13:25.225 "base_bdevs_list": [ 00:13:25.225 { 00:13:25.225 "name": null, 00:13:25.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.225 "is_configured": false, 00:13:25.225 "data_offset": 0, 00:13:25.225 "data_size": 63488 00:13:25.225 }, 00:13:25.225 { 00:13:25.225 "name": "BaseBdev2", 00:13:25.225 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:25.225 "is_configured": true, 00:13:25.225 "data_offset": 2048, 00:13:25.225 "data_size": 63488 00:13:25.225 } 00:13:25.225 ] 00:13:25.225 }' 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.225 01:33:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.791 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.791 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.791 01:33:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.791 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.791 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.791 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.791 01:33:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.791 01:33:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.792 01:33:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.792 01:33:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.792 "name": "raid_bdev1", 00:13:25.792 "uuid": "104690c9-6e9a-4683-8bd0-7231f0a69683", 00:13:25.792 "strip_size_kb": 0, 00:13:25.792 "state": "online", 00:13:25.792 "raid_level": "raid1", 00:13:25.792 "superblock": true, 00:13:25.792 "num_base_bdevs": 2, 00:13:25.792 "num_base_bdevs_discovered": 1, 00:13:25.792 "num_base_bdevs_operational": 1, 00:13:25.792 "base_bdevs_list": [ 00:13:25.792 { 00:13:25.792 "name": null, 00:13:25.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.792 "is_configured": false, 00:13:25.792 "data_offset": 0, 00:13:25.792 "data_size": 63488 00:13:25.792 }, 00:13:25.792 { 00:13:25.792 "name": "BaseBdev2", 00:13:25.792 "uuid": "bb842508-9241-5934-8236-8f76740a3529", 00:13:25.792 "is_configured": true, 00:13:25.792 "data_offset": 2048, 00:13:25.792 "data_size": 63488 00:13:25.792 } 00:13:25.792 ] 00:13:25.792 }' 00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75486
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75486 ']'
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75486
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75486
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:25.792 killing process with pid 75486
01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75486'
01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75486
00:13:25.792 Received shutdown signal, test time was about 60.000000 seconds
00:13:25.792
00:13:25.792 Latency(us)
00:13:25.792 [2024-11-17T01:33:34.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:25.792 [2024-11-17T01:33:34.252Z] ===================================================================================================================
00:13:25.792 [2024-11-17T01:33:34.252Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:13:25.792 [2024-11-17 01:33:34.117720] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:25.792 [2024-11-17 01:33:34.117834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:25.792 [2024-11-17 01:33:34.117875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:25.792 [2024-11-17 01:33:34.117887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:13:25.792 01:33:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75486
00:13:26.051 [2024-11-17 01:33:34.425688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:13:27.431
00:13:27.431 real 0m22.910s
00:13:27.431 user 0m27.635s
00:13:27.431 sys 0m3.712s
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:27.431 ************************************
00:13:27.431 END TEST raid_rebuild_test_sb
00:13:27.431 ************************************
00:13:27.431 01:33:35 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true
00:13:27.431 01:33:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:27.431 01:33:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:27.431 01:33:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:27.431 ************************************
00:13:27.431 START TEST raid_rebuild_test_io
00:13:27.431 ************************************
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local
num_base_bdevs=2 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:27.431 
01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76210
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76210
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76210 ']'
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:27.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
01:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:27.431 01:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:27.431 I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:27.431 Zero copy mechanism will not be used.
00:13:27.431 [2024-11-17 01:33:35.726613] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:13:27.431 [2024-11-17 01:33:35.726719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76210 ]
00:13:27.691 [2024-11-17 01:33:35.899346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:27.691 [2024-11-17 01:33:36.027015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:27.950 [2024-11-17 01:33:36.261953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:27.950 [2024-11-17 01:33:36.262023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.210 BaseBdev1_malloc
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.210 [2024-11-17 01:33:36.578451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on
BaseBdev1_malloc 00:13:28.210 [2024-11-17 01:33:36.578534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.210 [2024-11-17 01:33:36.578559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:28.210 [2024-11-17 01:33:36.578572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.210 [2024-11-17 01:33:36.580987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.210 [2024-11-17 01:33:36.581024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.210 BaseBdev1 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 BaseBdev2_malloc 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.210 [2024-11-17 01:33:36.635539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:28.210 [2024-11-17 01:33:36.635601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.210 [2024-11-17 01:33:36.635621] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:28.210 [2024-11-17 01:33:36.635632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.210 [2024-11-17 01:33:36.637849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.210 [2024-11-17 01:33:36.637881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.210 BaseBdev2 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.210 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.471 spare_malloc 00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.471 spare_delay 00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.471 [2024-11-17 01:33:36.728931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:28.471 [2024-11-17 01:33:36.728987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:28.471 [2024-11-17 01:33:36.729005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:13:28.471 [2024-11-17 01:33:36.729016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:28.471 [2024-11-17 01:33:36.731265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:28.471 [2024-11-17 01:33:36.731301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:28.471 spare
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.471 [2024-11-17 01:33:36.740965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:28.471 [2024-11-17 01:33:36.742909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:28.471 [2024-11-17 01:33:36.742995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:28.471 [2024-11-17 01:33:36.743008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:13:28.471 [2024-11-17 01:33:36.743267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:13:28.471 [2024-11-17 01:33:36.743437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:28.471 [2024-11-17 01:33:36.743456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:13:28.471 [2024-11-17 01:33:36.743606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:28.471 "name": "raid_bdev1",
00:13:28.471 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:28.471 "strip_size_kb": 0,
00:13:28.471 "state": "online",
00:13:28.471 "raid_level": "raid1",
00:13:28.471 "superblock": false,
00:13:28.471 "num_base_bdevs": 2,
00:13:28.471 "num_base_bdevs_discovered": 2,
00:13:28.471 "num_base_bdevs_operational": 2,
00:13:28.471 "base_bdevs_list": [
00:13:28.471 {
00:13:28.471 "name": "BaseBdev1",
00:13:28.471 "uuid": "5108dd8d-4b71-571e-b28b-1728e92f691c",
00:13:28.471 "is_configured": true,
00:13:28.471 "data_offset": 0,
00:13:28.471 "data_size": 65536
00:13:28.471 },
00:13:28.471 {
00:13:28.471 "name": "BaseBdev2",
00:13:28.471 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:28.471 "is_configured": true,
00:13:28.471 "data_offset": 0,
00:13:28.471 "data_size": 65536
00:13:28.471 }
00:13:28.471 ]
00:13:28.471 }'
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:28.471 01:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.731 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:28.731 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.731 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.731 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:28.731 [2024-11-17 01:33:37.168462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:28.731 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.991 [2024-11-17 01:33:37.248057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:28.991 "name": "raid_bdev1",
00:13:28.991 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:28.991 "strip_size_kb": 0,
00:13:28.991 "state": "online",
00:13:28.991 "raid_level": "raid1",
00:13:28.991 "superblock": false,
00:13:28.991 "num_base_bdevs": 2,
00:13:28.991 "num_base_bdevs_discovered": 1,
00:13:28.991 "num_base_bdevs_operational": 1,
00:13:28.991 "base_bdevs_list": [
00:13:28.991 {
00:13:28.991 "name": null,
00:13:28.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:28.991 "is_configured": false,
00:13:28.991 "data_offset": 0,
00:13:28.991 "data_size": 65536
00:13:28.991 },
00:13:28.991 {
00:13:28.991 "name": "BaseBdev2",
00:13:28.991 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:28.991 "is_configured": true,
00:13:28.991 "data_offset": 0,
00:13:28.991 "data_size": 65536
00:13:28.991 }
00:13:28.991 ]
00:13:28.991 }'
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:28.991 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:28.991 [2024-11-17 01:33:37.345216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:28.991 I/O size of 3145728 is greater than zero copy threshold (65536).
00:13:28.991 Zero copy mechanism will not be used.
00:13:28.991 Running I/O for 60 seconds...
00:13:29.251 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:29.251 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:29.251 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:29.251 [2024-11-17 01:33:37.665090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:29.251 01:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:29.251 01:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:29.512 [2024-11-17 01:33:37.722487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:13:29.512 [2024-11-17 01:33:37.724686] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:29.512 [2024-11-17 01:33:37.839980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:29.512 [2024-11-17 01:33:37.840666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:29.773 [2024-11-17 01:33:38.061848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:29.773 [2024-11-17 01:33:38.062366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:30.037 202.00 IOPS, 606.00 MiB/s [2024-11-17T01:33:38.497Z] [2024-11-17 01:33:38.398272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:30.037 [2024-11-17 01:33:38.399173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:30.298 [2024-11-17 01:33:38.606470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:30.298 [2024-11-17 01:33:38.606934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.298 01:33:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:30.558 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:30.558 "name": "raid_bdev1",
00:13:30.558 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:30.558 "strip_size_kb": 0,
00:13:30.558 "state": "online",
00:13:30.558 "raid_level": "raid1",
00:13:30.558 "superblock": false,
00:13:30.558 "num_base_bdevs": 2,
00:13:30.558 "num_base_bdevs_discovered": 2,
00:13:30.558 "num_base_bdevs_operational": 2,
00:13:30.558 "process": {
00:13:30.558 "type": "rebuild",
00:13:30.558 "target": "spare",
00:13:30.558 "progress": {
00:13:30.558 "blocks": 10240,
00:13:30.558 "percent": 15
00:13:30.558 }
00:13:30.558 },
00:13:30.558 "base_bdevs_list": [
00:13:30.558 {
00:13:30.558 "name": "spare",
00:13:30.558 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654",
00:13:30.558 "is_configured": true,
00:13:30.558 "data_offset": 0,
00:13:30.558 "data_size": 65536
00:13:30.558 },
00:13:30.558 {
00:13:30.558 "name": "BaseBdev2",
00:13:30.558 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:30.558 "is_configured": true,
00:13:30.558 "data_offset": 0,
00:13:30.558 "data_size": 65536
00:13:30.558 }
00:13:30.558 ]
00:13:30.558 }'
00:13:30.558 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:30.558 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:30.558 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:30.558 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:30.558 01:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:30.558 01:33:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:30.558 01:33:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.558 [2024-11-17 01:33:38.867941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:30.558 [2024-11-17 01:33:38.940331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:30.818 [2024-11-17 01:33:39.041820] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:30.818 [2024-11-17 01:33:39.049784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:30.818 [2024-11-17 01:33:39.049920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:30.818 [2024-11-17 01:33:39.049937] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:30.818 [2024-11-17 01:33:39.094077] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:30.818 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:30.819 "name": "raid_bdev1",
00:13:30.819 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:30.819 "strip_size_kb": 0,
00:13:30.819 "state": "online",
00:13:30.819 "raid_level": "raid1",
00:13:30.819 "superblock": false,
00:13:30.819 "num_base_bdevs": 2,
00:13:30.819 "num_base_bdevs_discovered": 1,
00:13:30.819 "num_base_bdevs_operational": 1,
00:13:30.819 "base_bdevs_list": [
00:13:30.819 {
00:13:30.819 "name": null,
00:13:30.819 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:30.819 "is_configured": false,
00:13:30.819 "data_offset": 0,
00:13:30.819 "data_size": 65536
00:13:30.819 },
00:13:30.819 {
00:13:30.819 "name": "BaseBdev2",
00:13:30.819 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:30.819 "is_configured": true,
00:13:30.819 "data_offset": 0,
00:13:30.819 "data_size": 65536
00:13:30.819 }
00:13:30.819 ]
00:13:30.819 }'
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:30.819 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:31.337 178.50 IOPS, 535.50 MiB/s [2024-11-17T01:33:39.797Z] 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:31.337 "name": "raid_bdev1",
00:13:31.337 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:31.337 "strip_size_kb": 0,
00:13:31.337 "state": "online",
00:13:31.337 "raid_level": "raid1",
00:13:31.337 "superblock": false,
00:13:31.337 "num_base_bdevs": 2,
00:13:31.337 "num_base_bdevs_discovered": 1,
00:13:31.337 "num_base_bdevs_operational": 1,
00:13:31.337 "base_bdevs_list": [
00:13:31.337 {
00:13:31.337 "name": null,
00:13:31.337 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:31.337 "is_configured": false,
00:13:31.337 "data_offset": 0,
00:13:31.337 "data_size": 65536
00:13:31.337 },
00:13:31.337 {
00:13:31.337 "name": "BaseBdev2",
00:13:31.337 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:31.337 "is_configured": true,
00:13:31.337 "data_offset": 0,
00:13:31.337 "data_size": 65536
00:13:31.337 }
00:13:31.337 ]
00:13:31.337 }'
00:13:31.337 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:31.338 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:31.338 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:31.338 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:31.338 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:31.338 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:31.338 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:31.338 [2024-11-17 01:33:39.683198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:31.338 01:33:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:31.338 01:33:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:31.338 [2024-11-17 01:33:39.730137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:13:31.338 [2024-11-17 01:33:39.732299] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:31.596 [2024-11-17 01:33:39.839717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:31.596 [2024-11-17 01:33:39.840605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:13:31.596 [2024-11-17 01:33:39.966503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:31.596 [2024-11-17 01:33:39.966966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:13:32.165 [2024-11-17 01:33:40.317263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:32.165 [2024-11-17 01:33:40.318129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:13:32.165 181.00 IOPS, 543.00 MiB/s [2024-11-17T01:33:40.625Z] [2024-11-17 01:33:40.527821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:32.165 [2024-11-17 01:33:40.528292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:32.425 "name": "raid_bdev1",
00:13:32.425 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:32.425 "strip_size_kb": 0,
00:13:32.425 "state": "online",
00:13:32.425 "raid_level": "raid1",
00:13:32.425 "superblock": false,
00:13:32.425 "num_base_bdevs": 2,
00:13:32.425 "num_base_bdevs_discovered": 2,
00:13:32.425 "num_base_bdevs_operational": 2,
00:13:32.425 "process": {
00:13:32.425 "type": "rebuild",
00:13:32.425 "target": "spare",
00:13:32.425 "progress": {
00:13:32.425 "blocks": 12288,
00:13:32.425 "percent": 18
00:13:32.425 }
00:13:32.425 },
00:13:32.425 "base_bdevs_list": [
00:13:32.425 {
00:13:32.425 "name": "spare",
00:13:32.425 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654",
00:13:32.425 "is_configured": true,
00:13:32.425 "data_offset": 0,
00:13:32.425 "data_size": 65536
00:13:32.425 },
00:13:32.425 {
00:13:32.425 "name": "BaseBdev2",
00:13:32.425 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:32.425 "is_configured": true,
00:13:32.425 "data_offset": 0,
00:13:32.425 "data_size": 65536
00:13:32.425 }
00:13:32.425 ]
00:13:32.425 }'
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:32.425 [2024-11-17 01:33:40.788490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=394
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:32.425 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:32.687 "name": "raid_bdev1",
00:13:32.687 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:32.687 "strip_size_kb": 0,
00:13:32.687 "state": "online",
00:13:32.687 "raid_level": "raid1",
00:13:32.687 "superblock": false,
00:13:32.687 "num_base_bdevs": 2,
00:13:32.687 "num_base_bdevs_discovered": 2,
00:13:32.687 "num_base_bdevs_operational": 2,
00:13:32.687 "process": {
00:13:32.687 "type": "rebuild",
00:13:32.687 "target": "spare",
00:13:32.687 "progress": {
00:13:32.687 "blocks": 14336,
00:13:32.687 "percent": 21
00:13:32.687 }
00:13:32.687 },
00:13:32.687 "base_bdevs_list": [
00:13:32.687 {
00:13:32.687 "name": "spare",
00:13:32.687 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654",
00:13:32.687 "is_configured": true,
00:13:32.687 "data_offset": 0,
00:13:32.687 "data_size": 65536
00:13:32.687 },
00:13:32.687 {
00:13:32.687 "name": "BaseBdev2",
00:13:32.687 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:32.687 "is_configured": true,
00:13:32.687 "data_offset": 0,
00:13:32.687 "data_size": 65536
00:13:32.687 }
00:13:32.687 ]
00:13:32.687 }'
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:32.687 01:33:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' [2024-11-17 01:33:40.998707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 [2024-11-17 01:33:40.999066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:13:32.687 01:33:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:32.687 01:33:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:32.960 [2024-11-17 01:33:41.327727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:13:33.554 150.75 IOPS, 452.25 MiB/s [2024-11-17T01:33:42.014Z] [2024-11-17 01:33:41.772558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:13:33.554 [2024-11-17 01:33:41.773213] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:13:33.554 [2024-11-17 01:33:41.880812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:13:33.554 [2024-11-17 01:33:41.880970] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:33.814 "name": "raid_bdev1",
00:13:33.814 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:33.814 "strip_size_kb": 0,
00:13:33.814 "state": "online",
00:13:33.814 "raid_level": "raid1",
00:13:33.814 "superblock": false,
00:13:33.814 "num_base_bdevs": 2,
00:13:33.814 "num_base_bdevs_discovered": 2,
00:13:33.814 "num_base_bdevs_operational": 2,
00:13:33.814 "process": {
00:13:33.814 "type": "rebuild",
00:13:33.814 "target": "spare",
00:13:33.814 "progress": {
00:13:33.814 "blocks": 28672,
00:13:33.814 "percent": 43
00:13:33.814 }
00:13:33.814 },
00:13:33.814 "base_bdevs_list": [
00:13:33.814 {
00:13:33.814 "name": "spare",
00:13:33.814 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654",
00:13:33.814 "is_configured": true,
00:13:33.814 "data_offset": 0,
00:13:33.814 "data_size": 65536
00:13:33.814 },
00:13:33.814 {
00:13:33.814 "name": "BaseBdev2",
00:13:33.814 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:33.814 "is_configured": true,
00:13:33.814 "data_offset": 0,
00:13:33.814 "data_size": 65536
00:13:33.814 }
00:13:33.814 ]
00:13:33.814 }'
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:33.814 01:33:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:33.814 [2024-11-17 01:33:42.230421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:13:34.073 131.60 IOPS, 394.80 MiB/s [2024-11-17T01:33:42.533Z] [2024-11-17 01:33:42.455838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:13:34.073 [2024-11-17 01:33:42.456187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:13:34.333 [2024-11-17 01:33:42.773617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008
00:13:34.593 [2024-11-17 01:33:42.880185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:34.852 "name": "raid_bdev1",
00:13:34.852 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb",
00:13:34.852 "strip_size_kb": 0,
00:13:34.852 "state": "online",
00:13:34.852 "raid_level": "raid1",
00:13:34.852 "superblock": false,
00:13:34.852 "num_base_bdevs": 2,
00:13:34.852 "num_base_bdevs_discovered": 2,
00:13:34.852 "num_base_bdevs_operational": 2,
00:13:34.852 "process": {
00:13:34.852 "type": "rebuild",
00:13:34.852 "target": "spare",
00:13:34.852 "progress": {
00:13:34.852 "blocks": 45056,
00:13:34.852 "percent": 68
00:13:34.852 }
00:13:34.852 },
00:13:34.852 "base_bdevs_list": [
00:13:34.852 {
00:13:34.852 "name": "spare",
00:13:34.852 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654",
00:13:34.852 "is_configured": true,
00:13:34.852 "data_offset": 0,
00:13:34.852 "data_size": 65536
00:13:34.852 },
00:13:34.852 {
00:13:34.852 "name": "BaseBdev2",
00:13:34.852 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102",
00:13:34.852 "is_configured": true,
00:13:34.852 "data_offset": 0,
00:13:34.852 "data_size": 65536
00:13:34.852 }
00:13:34.852 ]
00:13:34.852 }'
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] [2024-11-17 01:33:43.296186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:34.852 01:33:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:13:35.369 116.17 IOPS, 348.50 MiB/s [2024-11-17T01:33:43.829Z] [2024-11-17 01:33:43.702098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:13:35.937 105.86 IOPS, 317.57 MiB/s [2024-11-17T01:33:44.397Z] [2024-11-17 01:33:44.346997] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:35.937 01:33:44
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.937 01:33:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.196 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.196 "name": "raid_bdev1", 00:13:36.196 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb", 00:13:36.196 "strip_size_kb": 0, 00:13:36.196 "state": "online", 00:13:36.196 "raid_level": "raid1", 00:13:36.196 "superblock": false, 00:13:36.196 "num_base_bdevs": 2, 00:13:36.196 "num_base_bdevs_discovered": 2, 00:13:36.196 "num_base_bdevs_operational": 2, 00:13:36.196 "process": { 00:13:36.196 "type": "rebuild", 00:13:36.196 "target": "spare", 00:13:36.196 "progress": { 00:13:36.196 "blocks": 65536, 00:13:36.196 "percent": 100 00:13:36.196 } 00:13:36.196 }, 00:13:36.196 "base_bdevs_list": [ 00:13:36.196 { 00:13:36.196 "name": "spare", 00:13:36.196 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654", 00:13:36.196 "is_configured": true, 00:13:36.196 "data_offset": 0, 00:13:36.196 "data_size": 65536 00:13:36.196 }, 00:13:36.196 { 00:13:36.196 "name": "BaseBdev2", 00:13:36.196 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102", 00:13:36.196 "is_configured": true, 00:13:36.196 "data_offset": 0, 00:13:36.196 "data_size": 65536 00:13:36.196 } 00:13:36.196 ] 00:13:36.196 }' 00:13:36.196 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.196 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.196 [2024-11-17 01:33:44.450506] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:36.196 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.196 [2024-11-17 01:33:44.453271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.196 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.196 01:33:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.133 99.12 IOPS, 297.38 MiB/s [2024-11-17T01:33:45.593Z] 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.133 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.133 "name": 
"raid_bdev1", 00:13:37.133 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb", 00:13:37.133 "strip_size_kb": 0, 00:13:37.133 "state": "online", 00:13:37.133 "raid_level": "raid1", 00:13:37.133 "superblock": false, 00:13:37.133 "num_base_bdevs": 2, 00:13:37.133 "num_base_bdevs_discovered": 2, 00:13:37.133 "num_base_bdevs_operational": 2, 00:13:37.133 "base_bdevs_list": [ 00:13:37.134 { 00:13:37.134 "name": "spare", 00:13:37.134 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654", 00:13:37.134 "is_configured": true, 00:13:37.134 "data_offset": 0, 00:13:37.134 "data_size": 65536 00:13:37.134 }, 00:13:37.134 { 00:13:37.134 "name": "BaseBdev2", 00:13:37.134 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102", 00:13:37.134 "is_configured": true, 00:13:37.134 "data_offset": 0, 00:13:37.134 "data_size": 65536 00:13:37.134 } 00:13:37.134 ] 00:13:37.134 }' 00:13:37.134 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.393 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.393 "name": "raid_bdev1", 00:13:37.393 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb", 00:13:37.393 "strip_size_kb": 0, 00:13:37.393 "state": "online", 00:13:37.393 "raid_level": "raid1", 00:13:37.393 "superblock": false, 00:13:37.393 "num_base_bdevs": 2, 00:13:37.393 "num_base_bdevs_discovered": 2, 00:13:37.393 "num_base_bdevs_operational": 2, 00:13:37.393 "base_bdevs_list": [ 00:13:37.393 { 00:13:37.393 "name": "spare", 00:13:37.393 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654", 00:13:37.393 "is_configured": true, 00:13:37.393 "data_offset": 0, 00:13:37.394 "data_size": 65536 00:13:37.394 }, 00:13:37.394 { 00:13:37.394 "name": "BaseBdev2", 00:13:37.394 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102", 00:13:37.394 "is_configured": true, 00:13:37.394 "data_offset": 0, 00:13:37.394 "data_size": 65536 00:13:37.394 } 00:13:37.394 ] 00:13:37.394 }' 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.394 "name": "raid_bdev1", 00:13:37.394 "uuid": "6cfdf749-af66-47c3-ba3c-035fe84af5cb", 00:13:37.394 "strip_size_kb": 0, 00:13:37.394 "state": "online", 00:13:37.394 "raid_level": "raid1", 00:13:37.394 "superblock": false, 00:13:37.394 "num_base_bdevs": 2, 00:13:37.394 "num_base_bdevs_discovered": 2, 00:13:37.394 
"num_base_bdevs_operational": 2, 00:13:37.394 "base_bdevs_list": [ 00:13:37.394 { 00:13:37.394 "name": "spare", 00:13:37.394 "uuid": "49a6906f-46aa-5d68-b135-ced5c5616654", 00:13:37.394 "is_configured": true, 00:13:37.394 "data_offset": 0, 00:13:37.394 "data_size": 65536 00:13:37.394 }, 00:13:37.394 { 00:13:37.394 "name": "BaseBdev2", 00:13:37.394 "uuid": "163d1e69-6299-5adf-a711-ece455f2d102", 00:13:37.394 "is_configured": true, 00:13:37.394 "data_offset": 0, 00:13:37.394 "data_size": 65536 00:13:37.394 } 00:13:37.394 ] 00:13:37.394 }' 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.394 01:33:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.962 [2024-11-17 01:33:46.175843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.962 [2024-11-17 01:33:46.175970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.962 00:13:37.962 Latency(us) 00:13:37.962 [2024-11-17T01:33:46.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.962 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:37.962 raid_bdev1 : 8.95 92.56 277.68 0.00 0.00 14900.46 298.70 113557.58 00:13:37.962 [2024-11-17T01:33:46.422Z] =================================================================================================================== 00:13:37.962 [2024-11-17T01:33:46.422Z] Total : 92.56 277.68 0.00 0.00 14900.46 298.70 113557.58 00:13:37.962 [2024-11-17 01:33:46.295099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:37.962 [2024-11-17 01:33:46.295182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.962 [2024-11-17 01:33:46.295281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.962 [2024-11-17 01:33:46.295341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:37.962 { 00:13:37.962 "results": [ 00:13:37.962 { 00:13:37.962 "job": "raid_bdev1", 00:13:37.962 "core_mask": "0x1", 00:13:37.962 "workload": "randrw", 00:13:37.962 "percentage": 50, 00:13:37.962 "status": "finished", 00:13:37.962 "queue_depth": 2, 00:13:37.962 "io_size": 3145728, 00:13:37.962 "runtime": 8.945471, 00:13:37.962 "iops": 92.56080535055114, 00:13:37.962 "mibps": 277.6824160516534, 00:13:37.962 "io_failed": 0, 00:13:37.962 "io_timeout": 0, 00:13:37.962 "avg_latency_us": 14900.458316984159, 00:13:37.962 "min_latency_us": 298.70393013100437, 00:13:37.962 "max_latency_us": 113557.57554585153 00:13:37.962 } 00:13:37.962 ], 00:13:37.962 "core_count": 1 00:13:37.962 } 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:37.962 01:33:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:37.962 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.963 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:37.963 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.963 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:37.963 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.963 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:37.963 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.963 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.963 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:38.222 /dev/nbd0 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:38.222 01:33:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.222 1+0 records in 00:13:38.222 1+0 records out 00:13:38.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580092 s, 7.1 MB/s 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:38.222 01:33:46 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.222 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:38.481 /dev/nbd1 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:13:38.481 1+0 records in 00:13:38.481 1+0 records out 00:13:38.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246763 s, 16.6 MB/s 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.481 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.482 01:33:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:38.482 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.482 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:38.482 01:33:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:38.741 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:38.741 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.741 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:38.741 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.741 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:38.741 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.741 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:39.000 01:33:47 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
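The `waitfornbd_exit` calls traced above poll `/proc/partitions` with `grep -q -w` in a bounded loop (`(( i <= 20 ))`), breaking as soon as the nbd device name disappears. A minimal Python sketch of that wait-until-gone pattern; the function name and list stand-in for the partition table are illustrative, not part of the test suite:

```python
import time

def wait_for_gone(is_present, max_retries=20, delay=0.1):
    # Mirrors the waitfornbd_exit loop: poll up to max_retries times,
    # returning True as soon as the device no longer shows up.
    for _ in range(max_retries):
        if not is_present():
            return True
        time.sleep(delay)
    return False

# In the real helper the predicate is `grep -q -w nbd1 /proc/partitions`;
# here a plain list stands in for the kernel's partition table.
partitions = ["nbd1"]
partitions.clear()          # simulate nbd_stop_disk detaching the device
assert wait_for_gone(lambda: "nbd1" in partitions, delay=0)
```

The bounded retry count matters: if the device never detaches, the helper returns failure instead of hanging the whole test run.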
00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76210 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76210 ']' 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76210 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.000 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76210 00:13:39.259 killing process with pid 76210 00:13:39.259 Received shutdown signal, test time was about 10.132387 seconds 00:13:39.259 00:13:39.259 Latency(us) 00:13:39.259 [2024-11-17T01:33:47.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.259 [2024-11-17T01:33:47.719Z] =================================================================================================================== 00:13:39.259 [2024-11-17T01:33:47.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.259 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.259 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.259 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76210' 00:13:39.259 01:33:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@973 -- # kill 76210 00:13:39.259 [2024-11-17 01:33:47.460208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.259 01:33:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76210 00:13:39.259 [2024-11-17 01:33:47.682142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:40.638 00:13:40.638 real 0m13.154s 00:13:40.638 user 0m16.164s 00:13:40.638 sys 0m1.545s 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.638 ************************************ 00:13:40.638 END TEST raid_rebuild_test_io 00:13:40.638 ************************************ 00:13:40.638 01:33:48 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:40.638 01:33:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:40.638 01:33:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.638 01:33:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.638 ************************************ 00:13:40.638 START TEST raid_rebuild_test_sb_io 00:13:40.638 ************************************ 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 
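The `killprocess` trace just above first probes pid 76210 with `kill -0` (signal 0 performs the existence and permission checks without delivering anything) before issuing the real `kill` and `wait`. A sketch of that liveness probe in Python, using `os.kill` with signal 0; the helper name is my own:

```python
import os

def pid_alive(pid):
    # kill -0 semantics: no signal is delivered, but the kernel still
    # reports whether the target process exists and is signalable.
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False        # no such process
    except PermissionError:
        return True         # exists, but owned by another user
    return True

# The current process is trivially alive.
assert pid_alive(os.getpid())
```

Probing before killing lets the helper report a clean "no such process" error instead of failing on the `kill` itself when the target already exited.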
00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = 
true ']' 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76604 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76604 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76604 ']' 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.638 01:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.638 [2024-11-17 01:33:48.941761] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:40.639 [2024-11-17 01:33:48.942243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76604 ] 00:13:40.639 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:40.639 Zero copy mechanism will not be used. 
00:13:40.899 [2024-11-17 01:33:49.112617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.899 [2024-11-17 01:33:49.217784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.160 [2024-11-17 01:33:49.422114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.160 [2024-11-17 01:33:49.422174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.420 BaseBdev1_malloc 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.420 [2024-11-17 01:33:49.803731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:41.420 [2024-11-17 01:33:49.804056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.420 [2024-11-17 01:33:49.804096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:13:41.420 [2024-11-17 01:33:49.804109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.420 [2024-11-17 01:33:49.806212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.420 [2024-11-17 01:33:49.806249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.420 BaseBdev1 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.420 BaseBdev2_malloc 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.420 [2024-11-17 01:33:49.856615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:41.420 [2024-11-17 01:33:49.856682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.420 [2024-11-17 01:33:49.856700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:41.420 [2024-11-17 01:33:49.856713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.420 [2024-11-17 01:33:49.858645] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.420 [2024-11-17 01:33:49.858680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:41.420 BaseBdev2 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.420 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.680 spare_malloc 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.680 spare_delay 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.680 [2024-11-17 01:33:49.936020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.680 [2024-11-17 01:33:49.936071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.680 [2024-11-17 01:33:49.936089] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:41.680 [2024-11-17 01:33:49.936100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.680 [2024-11-17 01:33:49.938077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.680 [2024-11-17 01:33:49.938111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.680 spare 00:13:41.680 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.681 [2024-11-17 01:33:49.948055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.681 [2024-11-17 01:33:49.949777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.681 [2024-11-17 01:33:49.949938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:41.681 [2024-11-17 01:33:49.949954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.681 [2024-11-17 01:33:49.950176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:41.681 [2024-11-17 01:33:49.950343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:41.681 [2024-11-17 01:33:49.950367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:41.681 [2024-11-17 01:33:49.950514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.681 01:33:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.681 01:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.681 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.681 "name": "raid_bdev1", 00:13:41.681 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:41.681 
"strip_size_kb": 0, 00:13:41.681 "state": "online", 00:13:41.681 "raid_level": "raid1", 00:13:41.681 "superblock": true, 00:13:41.681 "num_base_bdevs": 2, 00:13:41.681 "num_base_bdevs_discovered": 2, 00:13:41.681 "num_base_bdevs_operational": 2, 00:13:41.681 "base_bdevs_list": [ 00:13:41.681 { 00:13:41.681 "name": "BaseBdev1", 00:13:41.681 "uuid": "953ed831-a15c-59c9-a76d-283f210f9fae", 00:13:41.681 "is_configured": true, 00:13:41.681 "data_offset": 2048, 00:13:41.681 "data_size": 63488 00:13:41.681 }, 00:13:41.681 { 00:13:41.681 "name": "BaseBdev2", 00:13:41.681 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:41.681 "is_configured": true, 00:13:41.681 "data_offset": 2048, 00:13:41.681 "data_size": 63488 00:13:41.681 } 00:13:41.681 ] 00:13:41.681 }' 00:13:41.681 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.681 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.250 [2024-11-17 01:33:50.423480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.250 01:33:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.250 [2024-11-17 01:33:50.519092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:42.250 01:33:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.250 "name": "raid_bdev1", 00:13:42.250 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:42.250 "strip_size_kb": 0, 00:13:42.250 "state": "online", 00:13:42.250 "raid_level": "raid1", 00:13:42.250 "superblock": true, 00:13:42.250 "num_base_bdevs": 2, 00:13:42.250 "num_base_bdevs_discovered": 1, 00:13:42.250 "num_base_bdevs_operational": 1, 00:13:42.250 "base_bdevs_list": [ 00:13:42.250 { 00:13:42.250 "name": null, 00:13:42.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.250 "is_configured": false, 00:13:42.250 "data_offset": 0, 00:13:42.250 "data_size": 63488 00:13:42.250 }, 00:13:42.250 { 00:13:42.250 "name": "BaseBdev2", 00:13:42.250 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:42.250 "is_configured": true, 00:13:42.250 "data_offset": 2048, 00:13:42.250 "data_size": 63488 00:13:42.250 } 00:13:42.250 ] 00:13:42.250 }' 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.250 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.250 [2024-11-17 01:33:50.591555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:42.250 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:42.250 Zero copy mechanism will not be used. 00:13:42.250 Running I/O for 60 seconds... 00:13:42.818 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.818 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.818 01:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.818 [2024-11-17 01:33:51.001266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.818 [2024-11-17 01:33:51.038207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:42.818 01:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.818 01:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:42.818 [2024-11-17 01:33:51.040067] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.818 [2024-11-17 01:33:51.153540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.818 [2024-11-17 01:33:51.153954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:43.077 [2024-11-17 01:33:51.376951] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:43.077 [2024-11-17 01:33:51.377286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:13:43.336 249.00 IOPS, 747.00 MiB/s [2024-11-17T01:33:51.796Z] [2024-11-17 01:33:51.611314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:43.336 [2024-11-17 01:33:51.611701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:43.596 [2024-11-17 01:33:52.042658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.596 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.855 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.855 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.855 "name": "raid_bdev1", 00:13:43.855 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:43.855 "strip_size_kb": 0, 00:13:43.855 "state": "online", 00:13:43.855 "raid_level": "raid1", 
00:13:43.855 "superblock": true, 00:13:43.855 "num_base_bdevs": 2, 00:13:43.855 "num_base_bdevs_discovered": 2, 00:13:43.855 "num_base_bdevs_operational": 2, 00:13:43.855 "process": { 00:13:43.855 "type": "rebuild", 00:13:43.855 "target": "spare", 00:13:43.855 "progress": { 00:13:43.855 "blocks": 14336, 00:13:43.855 "percent": 22 00:13:43.855 } 00:13:43.855 }, 00:13:43.855 "base_bdevs_list": [ 00:13:43.855 { 00:13:43.855 "name": "spare", 00:13:43.855 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:43.855 "is_configured": true, 00:13:43.855 "data_offset": 2048, 00:13:43.855 "data_size": 63488 00:13:43.855 }, 00:13:43.855 { 00:13:43.855 "name": "BaseBdev2", 00:13:43.855 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:43.855 "is_configured": true, 00:13:43.855 "data_offset": 2048, 00:13:43.855 "data_size": 63488 00:13:43.855 } 00:13:43.855 ] 00:13:43.856 }' 00:13:43.856 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.856 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.856 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.856 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.856 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.856 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.856 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.856 [2024-11-17 01:33:52.203685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.856 [2024-11-17 01:33:52.256056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:44.115 [2024-11-17 01:33:52.357270] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:44.115 [2024-11-17 01:33:52.359323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.115 [2024-11-17 01:33:52.359361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.115 [2024-11-17 01:33:52.359374] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:44.115 [2024-11-17 01:33:52.398391] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:44.115 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.115 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:44.115 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.115 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.116 01:33:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.116 "name": "raid_bdev1", 00:13:44.116 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:44.116 "strip_size_kb": 0, 00:13:44.116 "state": "online", 00:13:44.116 "raid_level": "raid1", 00:13:44.116 "superblock": true, 00:13:44.116 "num_base_bdevs": 2, 00:13:44.116 "num_base_bdevs_discovered": 1, 00:13:44.116 "num_base_bdevs_operational": 1, 00:13:44.116 "base_bdevs_list": [ 00:13:44.116 { 00:13:44.116 "name": null, 00:13:44.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.116 "is_configured": false, 00:13:44.116 "data_offset": 0, 00:13:44.116 "data_size": 63488 00:13:44.116 }, 00:13:44.116 { 00:13:44.116 "name": "BaseBdev2", 00:13:44.116 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:44.116 "is_configured": true, 00:13:44.116 "data_offset": 2048, 00:13:44.116 "data_size": 63488 00:13:44.116 } 00:13:44.116 ] 00:13:44.116 }' 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.116 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.634 177.50 IOPS, 532.50 MiB/s [2024-11-17T01:33:53.094Z] 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.634 "name": "raid_bdev1", 00:13:44.634 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:44.634 "strip_size_kb": 0, 00:13:44.634 "state": "online", 00:13:44.634 "raid_level": "raid1", 00:13:44.634 "superblock": true, 00:13:44.634 "num_base_bdevs": 2, 00:13:44.634 "num_base_bdevs_discovered": 1, 00:13:44.634 "num_base_bdevs_operational": 1, 00:13:44.634 "base_bdevs_list": [ 00:13:44.634 { 00:13:44.634 "name": null, 00:13:44.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.634 "is_configured": false, 00:13:44.634 "data_offset": 0, 00:13:44.634 "data_size": 63488 00:13:44.634 }, 00:13:44.634 { 00:13:44.634 "name": "BaseBdev2", 00:13:44.634 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:44.634 "is_configured": true, 00:13:44.634 "data_offset": 2048, 00:13:44.634 "data_size": 63488 00:13:44.634 } 00:13:44.634 ] 00:13:44.634 }' 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.634 01:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.634 [2024-11-17 01:33:53.011986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.634 01:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.634 [2024-11-17 01:33:53.048309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:44.634 01:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:44.634 [2024-11-17 01:33:53.050199] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.894 [2024-11-17 01:33:53.164064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.894 [2024-11-17 01:33:53.164663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.894 [2024-11-17 01:33:53.280323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.894 [2024-11-17 01:33:53.280570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:45.153 185.67 IOPS, 557.00 MiB/s [2024-11-17T01:33:53.613Z] [2024-11-17 01:33:53.610871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:13:45.413 [2024-11-17 01:33:53.730464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.672 [2024-11-17 01:33:54.055533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.672 "name": "raid_bdev1", 00:13:45.672 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:45.672 "strip_size_kb": 0, 00:13:45.672 "state": "online", 00:13:45.672 "raid_level": "raid1", 00:13:45.672 "superblock": true, 00:13:45.672 "num_base_bdevs": 2, 00:13:45.672 "num_base_bdevs_discovered": 2, 00:13:45.672 "num_base_bdevs_operational": 2, 00:13:45.672 "process": { 00:13:45.672 "type": 
"rebuild", 00:13:45.672 "target": "spare", 00:13:45.672 "progress": { 00:13:45.672 "blocks": 14336, 00:13:45.672 "percent": 22 00:13:45.672 } 00:13:45.672 }, 00:13:45.672 "base_bdevs_list": [ 00:13:45.672 { 00:13:45.672 "name": "spare", 00:13:45.672 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:45.672 "is_configured": true, 00:13:45.672 "data_offset": 2048, 00:13:45.672 "data_size": 63488 00:13:45.672 }, 00:13:45.672 { 00:13:45.672 "name": "BaseBdev2", 00:13:45.672 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:45.672 "is_configured": true, 00:13:45.672 "data_offset": 2048, 00:13:45.672 "data_size": 63488 00:13:45.672 } 00:13:45.672 ] 00:13:45.672 }' 00:13:45.672 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:45.932 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=408 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.932 "name": "raid_bdev1", 00:13:45.932 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:45.932 "strip_size_kb": 0, 00:13:45.932 "state": "online", 00:13:45.932 "raid_level": "raid1", 00:13:45.932 "superblock": true, 00:13:45.932 "num_base_bdevs": 2, 00:13:45.932 "num_base_bdevs_discovered": 2, 00:13:45.932 "num_base_bdevs_operational": 2, 00:13:45.932 "process": { 00:13:45.932 "type": "rebuild", 00:13:45.932 "target": "spare", 00:13:45.932 "progress": { 00:13:45.932 "blocks": 14336, 00:13:45.932 "percent": 22 00:13:45.932 } 00:13:45.932 }, 00:13:45.932 "base_bdevs_list": [ 00:13:45.932 { 00:13:45.932 "name": "spare", 00:13:45.932 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:45.932 "is_configured": true, 00:13:45.932 
"data_offset": 2048, 00:13:45.932 "data_size": 63488 00:13:45.932 }, 00:13:45.932 { 00:13:45.932 "name": "BaseBdev2", 00:13:45.932 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:45.932 "is_configured": true, 00:13:45.932 "data_offset": 2048, 00:13:45.932 "data_size": 63488 00:13:45.932 } 00:13:45.932 ] 00:13:45.932 }' 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.932 [2024-11-17 01:33:54.276707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.932 01:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.191 [2024-11-17 01:33:54.520723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:46.450 156.75 IOPS, 470.25 MiB/s [2024-11-17T01:33:54.910Z] [2024-11-17 01:33:54.762786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:46.709 [2024-11-17 01:33:55.088439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:46.969 [2024-11-17 01:33:55.202144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:46.969 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.970 "name": "raid_bdev1", 00:13:46.970 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:46.970 "strip_size_kb": 0, 00:13:46.970 "state": "online", 00:13:46.970 "raid_level": "raid1", 00:13:46.970 "superblock": true, 00:13:46.970 "num_base_bdevs": 2, 00:13:46.970 "num_base_bdevs_discovered": 2, 00:13:46.970 "num_base_bdevs_operational": 2, 00:13:46.970 "process": { 00:13:46.970 "type": "rebuild", 00:13:46.970 "target": "spare", 00:13:46.970 "progress": { 00:13:46.970 "blocks": 28672, 00:13:46.970 "percent": 45 00:13:46.970 } 00:13:46.970 }, 00:13:46.970 "base_bdevs_list": [ 00:13:46.970 { 00:13:46.970 "name": "spare", 00:13:46.970 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:46.970 "is_configured": true, 00:13:46.970 "data_offset": 2048, 00:13:46.970 "data_size": 63488 00:13:46.970 }, 00:13:46.970 { 00:13:46.970 "name": "BaseBdev2", 
00:13:46.970 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:46.970 "is_configured": true, 00:13:46.970 "data_offset": 2048, 00:13:46.970 "data_size": 63488 00:13:46.970 } 00:13:46.970 ] 00:13:46.970 }' 00:13:46.970 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.229 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.229 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.229 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.229 01:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.229 133.80 IOPS, 401.40 MiB/s [2024-11-17T01:33:55.689Z] [2024-11-17 01:33:55.636823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:47.489 [2024-11-17 01:33:55.873464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:47.748 [2024-11-17 01:33:56.088612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:48.006 [2024-11-17 01:33:56.307313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:48.006 [2024-11-17 01:33:56.307697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.266 [2024-11-17 01:33:56.521937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.266 "name": "raid_bdev1", 00:13:48.266 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:48.266 "strip_size_kb": 0, 00:13:48.266 "state": "online", 00:13:48.266 "raid_level": "raid1", 00:13:48.266 "superblock": true, 00:13:48.266 "num_base_bdevs": 2, 00:13:48.266 "num_base_bdevs_discovered": 2, 00:13:48.266 "num_base_bdevs_operational": 2, 00:13:48.266 "process": { 00:13:48.266 "type": "rebuild", 00:13:48.266 "target": "spare", 00:13:48.266 "progress": { 00:13:48.266 "blocks": 45056, 00:13:48.266 "percent": 70 00:13:48.266 } 00:13:48.266 }, 00:13:48.266 "base_bdevs_list": [ 00:13:48.266 { 00:13:48.266 "name": "spare", 00:13:48.266 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:48.266 "is_configured": true, 00:13:48.266 "data_offset": 2048, 00:13:48.266 "data_size": 63488 00:13:48.266 
}, 00:13:48.266 { 00:13:48.266 "name": "BaseBdev2", 00:13:48.266 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:48.266 "is_configured": true, 00:13:48.266 "data_offset": 2048, 00:13:48.266 "data_size": 63488 00:13:48.266 } 00:13:48.266 ] 00:13:48.266 }' 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.266 120.33 IOPS, 361.00 MiB/s [2024-11-17T01:33:56.726Z] 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.266 01:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.526 [2024-11-17 01:33:56.954882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:49.094 [2024-11-17 01:33:57.274637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:49.353 107.71 IOPS, 323.14 MiB/s [2024-11-17T01:33:57.813Z] [2024-11-17 01:33:57.598512] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.353 01:33:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.353 [2024-11-17 01:33:57.694927] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:49.353 [2024-11-17 01:33:57.696895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.353 "name": "raid_bdev1", 00:13:49.353 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:49.353 "strip_size_kb": 0, 00:13:49.353 "state": "online", 00:13:49.353 "raid_level": "raid1", 00:13:49.353 "superblock": true, 00:13:49.353 "num_base_bdevs": 2, 00:13:49.353 "num_base_bdevs_discovered": 2, 00:13:49.353 "num_base_bdevs_operational": 2, 00:13:49.353 "process": { 00:13:49.353 "type": "rebuild", 00:13:49.353 "target": "spare", 00:13:49.353 "progress": { 00:13:49.353 "blocks": 63488, 00:13:49.353 "percent": 100 00:13:49.353 } 00:13:49.353 }, 00:13:49.353 "base_bdevs_list": [ 00:13:49.353 { 00:13:49.353 "name": "spare", 00:13:49.353 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:49.353 "is_configured": true, 00:13:49.353 "data_offset": 2048, 00:13:49.353 "data_size": 63488 00:13:49.353 }, 00:13:49.353 { 00:13:49.353 "name": "BaseBdev2", 00:13:49.353 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:49.353 "is_configured": true, 00:13:49.353 "data_offset": 
2048, 00:13:49.353 "data_size": 63488 00:13:49.353 } 00:13:49.353 ] 00:13:49.353 }' 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.353 01:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:50.601 99.00 IOPS, 297.00 MiB/s [2024-11-17T01:33:59.061Z] 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.601 01:33:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.601 "name": "raid_bdev1", 00:13:50.601 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:50.601 "strip_size_kb": 0, 00:13:50.601 "state": "online", 00:13:50.601 "raid_level": "raid1", 00:13:50.601 "superblock": true, 00:13:50.601 "num_base_bdevs": 2, 00:13:50.601 "num_base_bdevs_discovered": 2, 00:13:50.601 "num_base_bdevs_operational": 2, 00:13:50.601 "base_bdevs_list": [ 00:13:50.601 { 00:13:50.601 "name": "spare", 00:13:50.601 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:50.601 "is_configured": true, 00:13:50.601 "data_offset": 2048, 00:13:50.601 "data_size": 63488 00:13:50.601 }, 00:13:50.601 { 00:13:50.601 "name": "BaseBdev2", 00:13:50.601 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:50.601 "is_configured": true, 00:13:50.601 "data_offset": 2048, 00:13:50.601 "data_size": 63488 00:13:50.601 } 00:13:50.601 ] 00:13:50.601 }' 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.601 01:33:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.601 01:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.601 "name": "raid_bdev1", 00:13:50.601 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:50.601 "strip_size_kb": 0, 00:13:50.601 "state": "online", 00:13:50.601 "raid_level": "raid1", 00:13:50.601 "superblock": true, 00:13:50.601 "num_base_bdevs": 2, 00:13:50.601 "num_base_bdevs_discovered": 2, 00:13:50.602 "num_base_bdevs_operational": 2, 00:13:50.602 "base_bdevs_list": [ 00:13:50.602 { 00:13:50.602 "name": "spare", 00:13:50.602 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:50.602 "is_configured": true, 00:13:50.602 "data_offset": 2048, 00:13:50.602 "data_size": 63488 00:13:50.602 }, 00:13:50.602 { 00:13:50.602 "name": "BaseBdev2", 00:13:50.602 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:50.602 "is_configured": true, 00:13:50.602 "data_offset": 2048, 00:13:50.602 "data_size": 63488 00:13:50.602 } 00:13:50.602 ] 00:13:50.602 }' 00:13:50.602 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.602 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.602 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.863 01:33:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.863 "name": "raid_bdev1", 00:13:50.863 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:50.863 
"strip_size_kb": 0, 00:13:50.863 "state": "online", 00:13:50.863 "raid_level": "raid1", 00:13:50.863 "superblock": true, 00:13:50.863 "num_base_bdevs": 2, 00:13:50.863 "num_base_bdevs_discovered": 2, 00:13:50.863 "num_base_bdevs_operational": 2, 00:13:50.863 "base_bdevs_list": [ 00:13:50.863 { 00:13:50.863 "name": "spare", 00:13:50.863 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:50.863 "is_configured": true, 00:13:50.863 "data_offset": 2048, 00:13:50.863 "data_size": 63488 00:13:50.863 }, 00:13:50.863 { 00:13:50.863 "name": "BaseBdev2", 00:13:50.863 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:50.863 "is_configured": true, 00:13:50.863 "data_offset": 2048, 00:13:50.863 "data_size": 63488 00:13:50.863 } 00:13:50.863 ] 00:13:50.863 }' 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.863 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.122 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:51.122 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.122 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.122 [2024-11-17 01:33:59.557200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:51.122 [2024-11-17 01:33:59.557231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.122 00:13:51.122 Latency(us) 00:13:51.122 [2024-11-17T01:33:59.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.122 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:51.122 raid_bdev1 : 8.99 92.22 276.67 0.00 0.00 16158.74 309.44 113099.68 00:13:51.122 [2024-11-17T01:33:59.582Z] 
=================================================================================================================== 00:13:51.122 [2024-11-17T01:33:59.582Z] Total : 92.22 276.67 0.00 0.00 16158.74 309.44 113099.68 00:13:51.381 { 00:13:51.381 "results": [ 00:13:51.381 { 00:13:51.381 "job": "raid_bdev1", 00:13:51.381 "core_mask": "0x1", 00:13:51.381 "workload": "randrw", 00:13:51.381 "percentage": 50, 00:13:51.381 "status": "finished", 00:13:51.381 "queue_depth": 2, 00:13:51.381 "io_size": 3145728, 00:13:51.381 "runtime": 8.989011, 00:13:51.381 "iops": 92.22371626867516, 00:13:51.381 "mibps": 276.6711488060255, 00:13:51.381 "io_failed": 0, 00:13:51.381 "io_timeout": 0, 00:13:51.381 "avg_latency_us": 16158.736289842553, 00:13:51.381 "min_latency_us": 309.435807860262, 00:13:51.381 "max_latency_us": 113099.68209606987 00:13:51.381 } 00:13:51.381 ], 00:13:51.381 "core_count": 1 00:13:51.381 } 00:13:51.381 [2024-11-17 01:33:59.584901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.381 [2024-11-17 01:33:59.584936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.381 [2024-11-17 01:33:59.585008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.381 [2024-11-17 01:33:59.585017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:51.381 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.381 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.382 01:33:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.382 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:51.382 /dev/nbd0 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.641 1+0 records in 00:13:51.641 1+0 records out 00:13:51.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289626 s, 14.1 MB/s 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:51.641 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.642 01:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:51.642 /dev/nbd1 00:13:51.642 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:51.901 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:51.902 01:34:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.902 1+0 records in 00:13:51.902 1+0 records out 00:13:51.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260327 s, 15.7 MB/s 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.902 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:52.161 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.162 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.422 [2024-11-17 01:34:00.706434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.422 [2024-11-17 01:34:00.706505] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.422 [2024-11-17 01:34:00.706527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:52.422 [2024-11-17 01:34:00.706536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.422 [2024-11-17 01:34:00.708682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.422 [2024-11-17 01:34:00.708722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.422 [2024-11-17 01:34:00.708827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:52.422 [2024-11-17 01:34:00.708879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.422 [2024-11-17 01:34:00.709035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.422 spare 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.422 [2024-11-17 01:34:00.808938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:52.422 [2024-11-17 01:34:00.808970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.422 [2024-11-17 01:34:00.809227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:52.422 [2024-11-17 01:34:00.809406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:52.422 [2024-11-17 01:34:00.809421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007b00 00:13:52.422 [2024-11-17 01:34:00.809594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.422 01:34:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.422 "name": "raid_bdev1", 00:13:52.422 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:52.422 "strip_size_kb": 0, 00:13:52.422 "state": "online", 00:13:52.422 "raid_level": "raid1", 00:13:52.422 "superblock": true, 00:13:52.422 "num_base_bdevs": 2, 00:13:52.422 "num_base_bdevs_discovered": 2, 00:13:52.422 "num_base_bdevs_operational": 2, 00:13:52.422 "base_bdevs_list": [ 00:13:52.422 { 00:13:52.422 "name": "spare", 00:13:52.422 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:52.422 "is_configured": true, 00:13:52.422 "data_offset": 2048, 00:13:52.422 "data_size": 63488 00:13:52.422 }, 00:13:52.422 { 00:13:52.422 "name": "BaseBdev2", 00:13:52.422 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:52.422 "is_configured": true, 00:13:52.422 "data_offset": 2048, 00:13:52.422 "data_size": 63488 00:13:52.422 } 00:13:52.422 ] 00:13:52.422 }' 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.422 01:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.992 01:34:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.992 "name": "raid_bdev1", 00:13:52.992 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:52.992 "strip_size_kb": 0, 00:13:52.992 "state": "online", 00:13:52.992 "raid_level": "raid1", 00:13:52.992 "superblock": true, 00:13:52.992 "num_base_bdevs": 2, 00:13:52.992 "num_base_bdevs_discovered": 2, 00:13:52.992 "num_base_bdevs_operational": 2, 00:13:52.992 "base_bdevs_list": [ 00:13:52.992 { 00:13:52.992 "name": "spare", 00:13:52.992 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:52.992 "is_configured": true, 00:13:52.992 "data_offset": 2048, 00:13:52.992 "data_size": 63488 00:13:52.992 }, 00:13:52.992 { 00:13:52.992 "name": "BaseBdev2", 00:13:52.992 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:52.992 "is_configured": true, 00:13:52.992 "data_offset": 2048, 00:13:52.992 "data_size": 63488 00:13:52.992 } 00:13:52.992 ] 00:13:52.992 }' 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.992 01:34:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.992 [2024-11-17 01:34:01.345428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.992 "name": "raid_bdev1", 00:13:52.992 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:52.992 "strip_size_kb": 0, 00:13:52.992 "state": "online", 00:13:52.992 "raid_level": "raid1", 00:13:52.992 "superblock": true, 00:13:52.992 "num_base_bdevs": 2, 00:13:52.992 "num_base_bdevs_discovered": 1, 00:13:52.992 "num_base_bdevs_operational": 1, 00:13:52.992 "base_bdevs_list": [ 00:13:52.992 { 00:13:52.992 "name": null, 00:13:52.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.992 "is_configured": false, 00:13:52.992 "data_offset": 0, 00:13:52.992 "data_size": 63488 00:13:52.992 }, 00:13:52.992 { 00:13:52.992 "name": "BaseBdev2", 00:13:52.992 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:52.992 "is_configured": true, 00:13:52.992 "data_offset": 2048, 00:13:52.992 "data_size": 63488 00:13:52.992 } 00:13:52.992 ] 00:13:52.992 }' 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.992 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.561 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:53.561 01:34:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.561 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.561 [2024-11-17 01:34:01.804718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.561 [2024-11-17 01:34:01.804917] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:53.561 [2024-11-17 01:34:01.804934] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:53.561 [2024-11-17 01:34:01.804969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.561 [2024-11-17 01:34:01.820705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:53.561 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.561 01:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:53.561 [2024-11-17 01:34:01.822510] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.500 "name": "raid_bdev1", 00:13:54.500 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:54.500 "strip_size_kb": 0, 00:13:54.500 "state": "online", 00:13:54.500 "raid_level": "raid1", 00:13:54.500 "superblock": true, 00:13:54.500 "num_base_bdevs": 2, 00:13:54.500 "num_base_bdevs_discovered": 2, 00:13:54.500 "num_base_bdevs_operational": 2, 00:13:54.500 "process": { 00:13:54.500 "type": "rebuild", 00:13:54.500 "target": "spare", 00:13:54.500 "progress": { 00:13:54.500 "blocks": 20480, 00:13:54.500 "percent": 32 00:13:54.500 } 00:13:54.500 }, 00:13:54.500 "base_bdevs_list": [ 00:13:54.500 { 00:13:54.500 "name": "spare", 00:13:54.500 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:54.500 "is_configured": true, 00:13:54.500 "data_offset": 2048, 00:13:54.500 "data_size": 63488 00:13:54.500 }, 00:13:54.500 { 00:13:54.500 "name": "BaseBdev2", 00:13:54.500 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:54.500 "is_configured": true, 00:13:54.500 "data_offset": 2048, 00:13:54.500 "data_size": 63488 00:13:54.500 } 00:13:54.500 ] 00:13:54.500 }' 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.500 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.759 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:13:54.759 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:54.759 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.759 01:34:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.759 [2024-11-17 01:34:02.978393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.759 [2024-11-17 01:34:03.027260] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.759 [2024-11-17 01:34:03.027333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.759 [2024-11-17 01:34:03.027348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.759 [2024-11-17 01:34:03.027356] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.759 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.759 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:54.759 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.759 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.759 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.759 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.759 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:54.759 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.760 "name": "raid_bdev1", 00:13:54.760 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:54.760 "strip_size_kb": 0, 00:13:54.760 "state": "online", 00:13:54.760 "raid_level": "raid1", 00:13:54.760 "superblock": true, 00:13:54.760 "num_base_bdevs": 2, 00:13:54.760 "num_base_bdevs_discovered": 1, 00:13:54.760 "num_base_bdevs_operational": 1, 00:13:54.760 "base_bdevs_list": [ 00:13:54.760 { 00:13:54.760 "name": null, 00:13:54.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.760 "is_configured": false, 00:13:54.760 "data_offset": 0, 00:13:54.760 "data_size": 63488 00:13:54.760 }, 00:13:54.760 { 00:13:54.760 "name": "BaseBdev2", 00:13:54.760 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:54.760 "is_configured": true, 00:13:54.760 "data_offset": 2048, 00:13:54.760 "data_size": 63488 00:13:54.760 } 00:13:54.760 ] 00:13:54.760 }' 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.760 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.328 01:34:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:55.328 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.328 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.328 [2024-11-17 01:34:03.535197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:55.328 [2024-11-17 01:34:03.535267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.328 [2024-11-17 01:34:03.535291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:55.328 [2024-11-17 01:34:03.535302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.328 [2024-11-17 01:34:03.535826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.328 [2024-11-17 01:34:03.535858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:55.328 [2024-11-17 01:34:03.535956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:55.328 [2024-11-17 01:34:03.535978] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:55.328 [2024-11-17 01:34:03.535988] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:55.328 [2024-11-17 01:34:03.536009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.328 [2024-11-17 01:34:03.552559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:55.328 spare 00:13:55.328 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.328 01:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:55.328 [2024-11-17 01:34:03.554424] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.266 "name": "raid_bdev1", 00:13:56.266 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:56.266 "strip_size_kb": 0, 00:13:56.266 
"state": "online", 00:13:56.266 "raid_level": "raid1", 00:13:56.266 "superblock": true, 00:13:56.266 "num_base_bdevs": 2, 00:13:56.266 "num_base_bdevs_discovered": 2, 00:13:56.266 "num_base_bdevs_operational": 2, 00:13:56.266 "process": { 00:13:56.266 "type": "rebuild", 00:13:56.266 "target": "spare", 00:13:56.266 "progress": { 00:13:56.266 "blocks": 20480, 00:13:56.266 "percent": 32 00:13:56.266 } 00:13:56.266 }, 00:13:56.266 "base_bdevs_list": [ 00:13:56.266 { 00:13:56.266 "name": "spare", 00:13:56.266 "uuid": "c68b90d7-c855-5948-a999-195fd7af8b9e", 00:13:56.266 "is_configured": true, 00:13:56.266 "data_offset": 2048, 00:13:56.266 "data_size": 63488 00:13:56.266 }, 00:13:56.266 { 00:13:56.266 "name": "BaseBdev2", 00:13:56.266 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:56.266 "is_configured": true, 00:13:56.266 "data_offset": 2048, 00:13:56.266 "data_size": 63488 00:13:56.266 } 00:13:56.266 ] 00:13:56.266 }' 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.266 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.266 [2024-11-17 01:34:04.709823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.525 [2024-11-17 01:34:04.759212] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:56.525 [2024-11-17 01:34:04.759267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.525 [2024-11-17 01:34:04.759299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.525 [2024-11-17 01:34:04.759305] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.525 01:34:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.525 "name": "raid_bdev1", 00:13:56.525 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:56.525 "strip_size_kb": 0, 00:13:56.525 "state": "online", 00:13:56.525 "raid_level": "raid1", 00:13:56.525 "superblock": true, 00:13:56.525 "num_base_bdevs": 2, 00:13:56.525 "num_base_bdevs_discovered": 1, 00:13:56.525 "num_base_bdevs_operational": 1, 00:13:56.525 "base_bdevs_list": [ 00:13:56.525 { 00:13:56.525 "name": null, 00:13:56.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.525 "is_configured": false, 00:13:56.525 "data_offset": 0, 00:13:56.525 "data_size": 63488 00:13:56.525 }, 00:13:56.525 { 00:13:56.525 "name": "BaseBdev2", 00:13:56.525 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:56.525 "is_configured": true, 00:13:56.525 "data_offset": 2048, 00:13:56.525 "data_size": 63488 00:13:56.525 } 00:13:56.525 ] 00:13:56.525 }' 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.525 01:34:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.095 "name": "raid_bdev1", 00:13:57.095 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:57.095 "strip_size_kb": 0, 00:13:57.095 "state": "online", 00:13:57.095 "raid_level": "raid1", 00:13:57.095 "superblock": true, 00:13:57.095 "num_base_bdevs": 2, 00:13:57.095 "num_base_bdevs_discovered": 1, 00:13:57.095 "num_base_bdevs_operational": 1, 00:13:57.095 "base_bdevs_list": [ 00:13:57.095 { 00:13:57.095 "name": null, 00:13:57.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.095 "is_configured": false, 00:13:57.095 "data_offset": 0, 00:13:57.095 "data_size": 63488 00:13:57.095 }, 00:13:57.095 { 00:13:57.095 "name": "BaseBdev2", 00:13:57.095 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:57.095 "is_configured": true, 00:13:57.095 "data_offset": 2048, 00:13:57.095 "data_size": 63488 00:13:57.095 } 00:13:57.095 ] 00:13:57.095 }' 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.095 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.095 [2024-11-17 01:34:05.382489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:57.095 [2024-11-17 01:34:05.382538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.095 [2024-11-17 01:34:05.382562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:57.095 [2024-11-17 01:34:05.382571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.095 [2024-11-17 01:34:05.383032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.095 [2024-11-17 01:34:05.383059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:57.096 [2024-11-17 01:34:05.383149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:57.096 [2024-11-17 01:34:05.383170] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:57.096 [2024-11-17 01:34:05.383180] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:57.096 [2024-11-17 01:34:05.383192] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:57.096 BaseBdev1 00:13:57.096 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.096 01:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.038 "name": "raid_bdev1", 00:13:58.038 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:58.038 "strip_size_kb": 0, 00:13:58.038 "state": "online", 00:13:58.038 "raid_level": "raid1", 00:13:58.038 "superblock": true, 00:13:58.038 "num_base_bdevs": 2, 00:13:58.038 "num_base_bdevs_discovered": 1, 00:13:58.038 "num_base_bdevs_operational": 1, 00:13:58.038 "base_bdevs_list": [ 00:13:58.038 { 00:13:58.038 "name": null, 00:13:58.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.038 "is_configured": false, 00:13:58.038 "data_offset": 0, 00:13:58.038 "data_size": 63488 00:13:58.038 }, 00:13:58.038 { 00:13:58.038 "name": "BaseBdev2", 00:13:58.038 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:58.038 "is_configured": true, 00:13:58.038 "data_offset": 2048, 00:13:58.038 "data_size": 63488 00:13:58.038 } 00:13:58.038 ] 00:13:58.038 }' 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.038 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.631 "name": "raid_bdev1", 00:13:58.631 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:58.631 "strip_size_kb": 0, 00:13:58.631 "state": "online", 00:13:58.631 "raid_level": "raid1", 00:13:58.631 "superblock": true, 00:13:58.631 "num_base_bdevs": 2, 00:13:58.631 "num_base_bdevs_discovered": 1, 00:13:58.631 "num_base_bdevs_operational": 1, 00:13:58.631 "base_bdevs_list": [ 00:13:58.631 { 00:13:58.631 "name": null, 00:13:58.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.631 "is_configured": false, 00:13:58.631 "data_offset": 0, 00:13:58.631 "data_size": 63488 00:13:58.631 }, 00:13:58.631 { 00:13:58.631 "name": "BaseBdev2", 00:13:58.631 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:58.631 "is_configured": true, 00:13:58.631 "data_offset": 2048, 00:13:58.631 "data_size": 63488 00:13:58.631 } 00:13:58.631 ] 00:13:58.631 }' 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.631 [2024-11-17 01:34:06.983913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.631 [2024-11-17 01:34:06.984132] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:58.631 [2024-11-17 01:34:06.984198] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:58.631 request: 00:13:58.631 { 00:13:58.631 "base_bdev": "BaseBdev1", 00:13:58.631 "raid_bdev": "raid_bdev1", 00:13:58.631 "method": "bdev_raid_add_base_bdev", 00:13:58.631 "req_id": 1 00:13:58.631 } 00:13:58.631 Got JSON-RPC error response 00:13:58.631 response: 00:13:58.631 { 00:13:58.631 "code": -22, 00:13:58.631 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:58.631 } 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.631 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.632 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.632 01:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.572 01:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.572 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.572 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.572 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:59.572 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.572 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.831 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.831 "name": "raid_bdev1", 00:13:59.831 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:13:59.831 "strip_size_kb": 0, 00:13:59.831 "state": "online", 00:13:59.831 "raid_level": "raid1", 00:13:59.831 "superblock": true, 00:13:59.831 "num_base_bdevs": 2, 00:13:59.831 "num_base_bdevs_discovered": 1, 00:13:59.831 "num_base_bdevs_operational": 1, 00:13:59.831 "base_bdevs_list": [ 00:13:59.831 { 00:13:59.831 "name": null, 00:13:59.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.831 "is_configured": false, 00:13:59.831 "data_offset": 0, 00:13:59.831 "data_size": 63488 00:13:59.831 }, 00:13:59.831 { 00:13:59.831 "name": "BaseBdev2", 00:13:59.831 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:13:59.831 "is_configured": true, 00:13:59.831 "data_offset": 2048, 00:13:59.831 "data_size": 63488 00:13:59.831 } 00:13:59.831 ] 00:13:59.831 }' 00:13:59.831 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.831 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.090 01:34:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.090 "name": "raid_bdev1", 00:14:00.090 "uuid": "5820bd00-4e4e-473a-b015-7bb564f84da2", 00:14:00.090 "strip_size_kb": 0, 00:14:00.090 "state": "online", 00:14:00.090 "raid_level": "raid1", 00:14:00.090 "superblock": true, 00:14:00.090 "num_base_bdevs": 2, 00:14:00.090 "num_base_bdevs_discovered": 1, 00:14:00.090 "num_base_bdevs_operational": 1, 00:14:00.090 "base_bdevs_list": [ 00:14:00.090 { 00:14:00.090 "name": null, 00:14:00.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.090 "is_configured": false, 00:14:00.090 "data_offset": 0, 00:14:00.090 "data_size": 63488 00:14:00.090 }, 00:14:00.090 { 00:14:00.090 "name": "BaseBdev2", 00:14:00.090 "uuid": "7df58284-b80e-5fac-b01f-568d01a203a6", 00:14:00.090 "is_configured": true, 00:14:00.090 "data_offset": 2048, 00:14:00.090 "data_size": 63488 00:14:00.090 } 00:14:00.090 ] 00:14:00.090 }' 00:14:00.090 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.349 01:34:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76604 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76604 ']' 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76604 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76604 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.349 killing process with pid 76604 00:14:00.349 Received shutdown signal, test time was about 18.068568 seconds 00:14:00.349 00:14:00.349 Latency(us) 00:14:00.349 [2024-11-17T01:34:08.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.349 [2024-11-17T01:34:08.809Z] =================================================================================================================== 00:14:00.349 [2024-11-17T01:34:08.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76604' 00:14:00.349 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76604 00:14:00.350 [2024-11-17 01:34:08.627181] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.350 [2024-11-17 01:34:08.627315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.350 01:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76604 00:14:00.350 [2024-11-17 01:34:08.627369] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.350 [2024-11-17 01:34:08.627385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:00.609 [2024-11-17 01:34:08.852655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.546 01:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:01.546 ************************************ 00:14:01.546 END TEST raid_rebuild_test_sb_io 00:14:01.546 ************************************ 00:14:01.546 00:14:01.546 real 0m21.092s 00:14:01.546 user 0m27.397s 00:14:01.546 sys 0m2.156s 00:14:01.546 01:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.546 01:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.546 01:34:09 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:01.547 01:34:09 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:01.547 01:34:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:01.547 01:34:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.547 01:34:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.806 ************************************ 00:14:01.806 START TEST raid_rebuild_test 00:14:01.806 ************************************ 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:01.806 01:34:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77314 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77314 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77314 ']' 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.806 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.806 [2024-11-17 01:34:10.119019] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:01.806 [2024-11-17 01:34:10.119270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:01.806 Zero copy mechanism will not be used. 00:14:01.806 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77314 ] 00:14:02.066 [2024-11-17 01:34:10.295001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.066 [2024-11-17 01:34:10.400246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.326 [2024-11-17 01:34:10.598806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.326 [2024-11-17 01:34:10.598942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.586 BaseBdev1_malloc 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:02.586 [2024-11-17 01:34:10.972959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:02.586 [2024-11-17 01:34:10.973092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.586 [2024-11-17 01:34:10.973134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:02.586 [2024-11-17 01:34:10.973170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.586 [2024-11-17 01:34:10.975257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.586 [2024-11-17 01:34:10.975334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:02.586 BaseBdev1 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.586 01:34:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.586 BaseBdev2_malloc 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.586 [2024-11-17 01:34:11.027207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:02.586 [2024-11-17 01:34:11.027271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:02.586 [2024-11-17 01:34:11.027289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:02.586 [2024-11-17 01:34:11.027300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.586 [2024-11-17 01:34:11.029342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.586 [2024-11-17 01:34:11.029382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:02.586 BaseBdev2 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.586 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 BaseBdev3_malloc 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 [2024-11-17 01:34:11.094782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:02.847 [2024-11-17 01:34:11.094832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.847 [2024-11-17 01:34:11.094851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:02.847 [2024-11-17 01:34:11.094862] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.847 [2024-11-17 01:34:11.096798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.847 [2024-11-17 01:34:11.096837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:02.847 BaseBdev3 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 BaseBdev4_malloc 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 [2024-11-17 01:34:11.148407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:02.847 [2024-11-17 01:34:11.148515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.847 [2024-11-17 01:34:11.148551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:02.847 [2024-11-17 01:34:11.148597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.847 [2024-11-17 01:34:11.150564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.847 [2024-11-17 01:34:11.150636] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:02.847 BaseBdev4 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 spare_malloc 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 spare_delay 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 [2024-11-17 01:34:11.215937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.847 [2024-11-17 01:34:11.216042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.847 [2024-11-17 01:34:11.216097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:02.847 [2024-11-17 01:34:11.216130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.847 [2024-11-17 
01:34:11.218138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.847 [2024-11-17 01:34:11.218173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.847 spare 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.847 [2024-11-17 01:34:11.227964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.847 [2024-11-17 01:34:11.229732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.847 [2024-11-17 01:34:11.229847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.847 [2024-11-17 01:34:11.229935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.847 [2024-11-17 01:34:11.230042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:02.847 [2024-11-17 01:34:11.230083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:02.847 [2024-11-17 01:34:11.230335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:02.847 [2024-11-17 01:34:11.230543] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:02.847 [2024-11-17 01:34:11.230587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:02.847 [2024-11-17 01:34:11.230782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.847 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.848 "name": "raid_bdev1", 00:14:02.848 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:02.848 "strip_size_kb": 0, 00:14:02.848 "state": "online", 00:14:02.848 "raid_level": 
"raid1", 00:14:02.848 "superblock": false, 00:14:02.848 "num_base_bdevs": 4, 00:14:02.848 "num_base_bdevs_discovered": 4, 00:14:02.848 "num_base_bdevs_operational": 4, 00:14:02.848 "base_bdevs_list": [ 00:14:02.848 { 00:14:02.848 "name": "BaseBdev1", 00:14:02.848 "uuid": "7e8c52e7-3ccb-5039-abfa-749ad2ff6181", 00:14:02.848 "is_configured": true, 00:14:02.848 "data_offset": 0, 00:14:02.848 "data_size": 65536 00:14:02.848 }, 00:14:02.848 { 00:14:02.848 "name": "BaseBdev2", 00:14:02.848 "uuid": "229d7bb0-2fa4-584e-93e5-c8b04a97aed9", 00:14:02.848 "is_configured": true, 00:14:02.848 "data_offset": 0, 00:14:02.848 "data_size": 65536 00:14:02.848 }, 00:14:02.848 { 00:14:02.848 "name": "BaseBdev3", 00:14:02.848 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:02.848 "is_configured": true, 00:14:02.848 "data_offset": 0, 00:14:02.848 "data_size": 65536 00:14:02.848 }, 00:14:02.848 { 00:14:02.848 "name": "BaseBdev4", 00:14:02.848 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:02.848 "is_configured": true, 00:14:02.848 "data_offset": 0, 00:14:02.848 "data_size": 65536 00:14:02.848 } 00:14:02.848 ] 00:14:02.848 }' 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.848 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.416 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:03.416 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.416 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.416 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.416 [2024-11-17 01:34:11.691536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.416 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.416 01:34:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:03.416 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.416 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.416 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.417 01:34:11 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:03.676 [2024-11-17 01:34:11.950868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:03.676 /dev/nbd0 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.676 1+0 records in 00:14:03.676 1+0 records out 00:14:03.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457355 s, 9.0 MB/s 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:03.676 01:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:10.246 65536+0 records in 00:14:10.246 65536+0 records out 00:14:10.246 33554432 bytes (34 MB, 32 MiB) copied, 5.55962 s, 6.0 MB/s 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:10.246 01:34:17 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:10.246 [2024-11-17 01:34:17.807236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.246 [2024-11-17 01:34:17.819293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.246 01:34:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.246 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.246 "name": "raid_bdev1", 00:14:10.246 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:10.246 "strip_size_kb": 0, 00:14:10.246 "state": "online", 00:14:10.246 "raid_level": "raid1", 00:14:10.246 "superblock": false, 00:14:10.246 "num_base_bdevs": 4, 00:14:10.246 "num_base_bdevs_discovered": 3, 00:14:10.246 "num_base_bdevs_operational": 3, 00:14:10.246 "base_bdevs_list": [ 00:14:10.246 { 00:14:10.246 "name": null, 00:14:10.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.246 "is_configured": false, 00:14:10.246 "data_offset": 0, 00:14:10.246 "data_size": 65536 00:14:10.246 }, 00:14:10.246 { 00:14:10.246 "name": "BaseBdev2", 00:14:10.247 "uuid": "229d7bb0-2fa4-584e-93e5-c8b04a97aed9", 00:14:10.247 "is_configured": true, 00:14:10.247 "data_offset": 0, 00:14:10.247 "data_size": 65536 00:14:10.247 }, 00:14:10.247 { 00:14:10.247 "name": "BaseBdev3", 00:14:10.247 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:10.247 "is_configured": true, 00:14:10.247 "data_offset": 0, 00:14:10.247 "data_size": 65536 00:14:10.247 }, 00:14:10.247 { 00:14:10.247 "name": "BaseBdev4", 00:14:10.247 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:10.247 
"is_configured": true, 00:14:10.247 "data_offset": 0, 00:14:10.247 "data_size": 65536 00:14:10.247 } 00:14:10.247 ] 00:14:10.247 }' 00:14:10.247 01:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.247 01:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.247 01:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:10.247 01:34:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.247 01:34:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.247 [2024-11-17 01:34:18.262547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.247 [2024-11-17 01:34:18.277812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:10.247 01:34:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.247 01:34:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:10.247 [2024-11-17 01:34:18.279633] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.184 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.184 "name": "raid_bdev1", 00:14:11.184 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:11.184 "strip_size_kb": 0, 00:14:11.184 "state": "online", 00:14:11.184 "raid_level": "raid1", 00:14:11.184 "superblock": false, 00:14:11.184 "num_base_bdevs": 4, 00:14:11.184 "num_base_bdevs_discovered": 4, 00:14:11.184 "num_base_bdevs_operational": 4, 00:14:11.184 "process": { 00:14:11.184 "type": "rebuild", 00:14:11.184 "target": "spare", 00:14:11.184 "progress": { 00:14:11.184 "blocks": 20480, 00:14:11.184 "percent": 31 00:14:11.184 } 00:14:11.184 }, 00:14:11.184 "base_bdevs_list": [ 00:14:11.184 { 00:14:11.184 "name": "spare", 00:14:11.185 "uuid": "638892e0-3738-5c6e-9857-7cc193d0b298", 00:14:11.185 "is_configured": true, 00:14:11.185 "data_offset": 0, 00:14:11.185 "data_size": 65536 00:14:11.185 }, 00:14:11.185 { 00:14:11.185 "name": "BaseBdev2", 00:14:11.185 "uuid": "229d7bb0-2fa4-584e-93e5-c8b04a97aed9", 00:14:11.185 "is_configured": true, 00:14:11.185 "data_offset": 0, 00:14:11.185 "data_size": 65536 00:14:11.185 }, 00:14:11.185 { 00:14:11.185 "name": "BaseBdev3", 00:14:11.185 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:11.185 "is_configured": true, 00:14:11.185 "data_offset": 0, 00:14:11.185 "data_size": 65536 00:14:11.185 }, 00:14:11.185 { 00:14:11.185 "name": "BaseBdev4", 00:14:11.185 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:11.185 "is_configured": true, 00:14:11.185 "data_offset": 0, 00:14:11.185 "data_size": 65536 00:14:11.185 } 00:14:11.185 ] 00:14:11.185 }' 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.185 [2024-11-17 01:34:19.414913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.185 [2024-11-17 01:34:19.484304] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:11.185 [2024-11-17 01:34:19.484441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.185 [2024-11-17 01:34:19.484479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.185 [2024-11-17 01:34:19.484502] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.185 "name": "raid_bdev1", 00:14:11.185 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:11.185 "strip_size_kb": 0, 00:14:11.185 "state": "online", 00:14:11.185 "raid_level": "raid1", 00:14:11.185 "superblock": false, 00:14:11.185 "num_base_bdevs": 4, 00:14:11.185 "num_base_bdevs_discovered": 3, 00:14:11.185 "num_base_bdevs_operational": 3, 00:14:11.185 "base_bdevs_list": [ 00:14:11.185 { 00:14:11.185 "name": null, 00:14:11.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.185 "is_configured": false, 00:14:11.185 "data_offset": 0, 00:14:11.185 "data_size": 65536 00:14:11.185 }, 00:14:11.185 { 00:14:11.185 "name": "BaseBdev2", 00:14:11.185 "uuid": "229d7bb0-2fa4-584e-93e5-c8b04a97aed9", 00:14:11.185 "is_configured": true, 00:14:11.185 "data_offset": 0, 00:14:11.185 "data_size": 65536 00:14:11.185 }, 00:14:11.185 { 
00:14:11.185 "name": "BaseBdev3", 00:14:11.185 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:11.185 "is_configured": true, 00:14:11.185 "data_offset": 0, 00:14:11.185 "data_size": 65536 00:14:11.185 }, 00:14:11.185 { 00:14:11.185 "name": "BaseBdev4", 00:14:11.185 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:11.185 "is_configured": true, 00:14:11.185 "data_offset": 0, 00:14:11.185 "data_size": 65536 00:14:11.185 } 00:14:11.185 ] 00:14:11.185 }' 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.185 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.754 "name": "raid_bdev1", 00:14:11.754 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:11.754 "strip_size_kb": 0, 00:14:11.754 "state": "online", 
00:14:11.754 "raid_level": "raid1", 00:14:11.754 "superblock": false, 00:14:11.754 "num_base_bdevs": 4, 00:14:11.754 "num_base_bdevs_discovered": 3, 00:14:11.754 "num_base_bdevs_operational": 3, 00:14:11.754 "base_bdevs_list": [ 00:14:11.754 { 00:14:11.754 "name": null, 00:14:11.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.754 "is_configured": false, 00:14:11.754 "data_offset": 0, 00:14:11.754 "data_size": 65536 00:14:11.754 }, 00:14:11.754 { 00:14:11.754 "name": "BaseBdev2", 00:14:11.754 "uuid": "229d7bb0-2fa4-584e-93e5-c8b04a97aed9", 00:14:11.754 "is_configured": true, 00:14:11.754 "data_offset": 0, 00:14:11.754 "data_size": 65536 00:14:11.754 }, 00:14:11.754 { 00:14:11.754 "name": "BaseBdev3", 00:14:11.754 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:11.754 "is_configured": true, 00:14:11.754 "data_offset": 0, 00:14:11.754 "data_size": 65536 00:14:11.754 }, 00:14:11.754 { 00:14:11.754 "name": "BaseBdev4", 00:14:11.754 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:11.754 "is_configured": true, 00:14:11.754 "data_offset": 0, 00:14:11.754 "data_size": 65536 00:14:11.754 } 00:14:11.754 ] 00:14:11.754 }' 00:14:11.754 01:34:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.754 01:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.754 01:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.754 01:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.754 01:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.754 01:34:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.754 01:34:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.754 [2024-11-17 01:34:20.044321] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.754 [2024-11-17 01:34:20.058721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:11.754 01:34:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.754 01:34:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:11.754 [2024-11-17 01:34:20.060615] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.693 "name": "raid_bdev1", 00:14:12.693 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:12.693 "strip_size_kb": 0, 00:14:12.693 "state": "online", 00:14:12.693 "raid_level": "raid1", 00:14:12.693 "superblock": false, 00:14:12.693 "num_base_bdevs": 4, 00:14:12.693 
"num_base_bdevs_discovered": 4, 00:14:12.693 "num_base_bdevs_operational": 4, 00:14:12.693 "process": { 00:14:12.693 "type": "rebuild", 00:14:12.693 "target": "spare", 00:14:12.693 "progress": { 00:14:12.693 "blocks": 20480, 00:14:12.693 "percent": 31 00:14:12.693 } 00:14:12.693 }, 00:14:12.693 "base_bdevs_list": [ 00:14:12.693 { 00:14:12.693 "name": "spare", 00:14:12.693 "uuid": "638892e0-3738-5c6e-9857-7cc193d0b298", 00:14:12.693 "is_configured": true, 00:14:12.693 "data_offset": 0, 00:14:12.693 "data_size": 65536 00:14:12.693 }, 00:14:12.693 { 00:14:12.693 "name": "BaseBdev2", 00:14:12.693 "uuid": "229d7bb0-2fa4-584e-93e5-c8b04a97aed9", 00:14:12.693 "is_configured": true, 00:14:12.693 "data_offset": 0, 00:14:12.693 "data_size": 65536 00:14:12.693 }, 00:14:12.693 { 00:14:12.693 "name": "BaseBdev3", 00:14:12.693 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:12.693 "is_configured": true, 00:14:12.693 "data_offset": 0, 00:14:12.693 "data_size": 65536 00:14:12.693 }, 00:14:12.693 { 00:14:12.693 "name": "BaseBdev4", 00:14:12.693 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:12.693 "is_configured": true, 00:14:12.693 "data_offset": 0, 00:14:12.693 "data_size": 65536 00:14:12.693 } 00:14:12.693 ] 00:14:12.693 }' 00:14:12.693 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.952 [2024-11-17 01:34:21.199856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.952 [2024-11-17 01:34:21.265393] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.952 01:34:21 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.952 "name": "raid_bdev1", 00:14:12.952 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:12.952 "strip_size_kb": 0, 00:14:12.952 "state": "online", 00:14:12.952 "raid_level": "raid1", 00:14:12.952 "superblock": false, 00:14:12.952 "num_base_bdevs": 4, 00:14:12.952 "num_base_bdevs_discovered": 3, 00:14:12.952 "num_base_bdevs_operational": 3, 00:14:12.952 "process": { 00:14:12.952 "type": "rebuild", 00:14:12.952 "target": "spare", 00:14:12.952 "progress": { 00:14:12.952 "blocks": 24576, 00:14:12.952 "percent": 37 00:14:12.952 } 00:14:12.952 }, 00:14:12.952 "base_bdevs_list": [ 00:14:12.952 { 00:14:12.952 "name": "spare", 00:14:12.952 "uuid": "638892e0-3738-5c6e-9857-7cc193d0b298", 00:14:12.952 "is_configured": true, 00:14:12.952 "data_offset": 0, 00:14:12.952 "data_size": 65536 00:14:12.952 }, 00:14:12.952 { 00:14:12.952 "name": null, 00:14:12.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.952 "is_configured": false, 00:14:12.952 "data_offset": 0, 00:14:12.952 "data_size": 65536 00:14:12.952 }, 00:14:12.952 { 00:14:12.952 "name": "BaseBdev3", 00:14:12.952 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:12.952 "is_configured": true, 00:14:12.952 "data_offset": 0, 00:14:12.952 "data_size": 65536 00:14:12.952 }, 00:14:12.952 { 00:14:12.952 "name": "BaseBdev4", 00:14:12.952 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:12.952 "is_configured": true, 00:14:12.952 "data_offset": 0, 00:14:12.952 "data_size": 65536 00:14:12.952 } 00:14:12.952 ] 00:14:12.952 }' 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.952 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=435 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.212 "name": "raid_bdev1", 00:14:13.212 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:13.212 "strip_size_kb": 0, 00:14:13.212 "state": "online", 00:14:13.212 "raid_level": "raid1", 00:14:13.212 "superblock": false, 00:14:13.212 "num_base_bdevs": 4, 00:14:13.212 "num_base_bdevs_discovered": 3, 00:14:13.212 "num_base_bdevs_operational": 3, 00:14:13.212 "process": { 00:14:13.212 "type": "rebuild", 00:14:13.212 "target": "spare", 00:14:13.212 "progress": { 
00:14:13.212 "blocks": 26624, 00:14:13.212 "percent": 40 00:14:13.212 } 00:14:13.212 }, 00:14:13.212 "base_bdevs_list": [ 00:14:13.212 { 00:14:13.212 "name": "spare", 00:14:13.212 "uuid": "638892e0-3738-5c6e-9857-7cc193d0b298", 00:14:13.212 "is_configured": true, 00:14:13.212 "data_offset": 0, 00:14:13.212 "data_size": 65536 00:14:13.212 }, 00:14:13.212 { 00:14:13.212 "name": null, 00:14:13.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.212 "is_configured": false, 00:14:13.212 "data_offset": 0, 00:14:13.212 "data_size": 65536 00:14:13.212 }, 00:14:13.212 { 00:14:13.212 "name": "BaseBdev3", 00:14:13.212 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:13.212 "is_configured": true, 00:14:13.212 "data_offset": 0, 00:14:13.212 "data_size": 65536 00:14:13.212 }, 00:14:13.212 { 00:14:13.212 "name": "BaseBdev4", 00:14:13.212 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:13.212 "is_configured": true, 00:14:13.212 "data_offset": 0, 00:14:13.212 "data_size": 65536 00:14:13.212 } 00:14:13.212 ] 00:14:13.212 }' 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.212 01:34:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.150 01:34:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.410 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.410 "name": "raid_bdev1", 00:14:14.410 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:14.410 "strip_size_kb": 0, 00:14:14.410 "state": "online", 00:14:14.410 "raid_level": "raid1", 00:14:14.410 "superblock": false, 00:14:14.410 "num_base_bdevs": 4, 00:14:14.410 "num_base_bdevs_discovered": 3, 00:14:14.410 "num_base_bdevs_operational": 3, 00:14:14.410 "process": { 00:14:14.410 "type": "rebuild", 00:14:14.410 "target": "spare", 00:14:14.410 "progress": { 00:14:14.410 "blocks": 51200, 00:14:14.410 "percent": 78 00:14:14.410 } 00:14:14.410 }, 00:14:14.410 "base_bdevs_list": [ 00:14:14.410 { 00:14:14.410 "name": "spare", 00:14:14.410 "uuid": "638892e0-3738-5c6e-9857-7cc193d0b298", 00:14:14.410 "is_configured": true, 00:14:14.410 "data_offset": 0, 00:14:14.410 "data_size": 65536 00:14:14.410 }, 00:14:14.410 { 00:14:14.410 "name": null, 00:14:14.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.410 "is_configured": false, 00:14:14.410 "data_offset": 0, 00:14:14.410 "data_size": 65536 00:14:14.410 }, 00:14:14.410 { 00:14:14.410 "name": "BaseBdev3", 00:14:14.410 "uuid": 
"bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:14.410 "is_configured": true, 00:14:14.410 "data_offset": 0, 00:14:14.410 "data_size": 65536 00:14:14.410 }, 00:14:14.410 { 00:14:14.410 "name": "BaseBdev4", 00:14:14.410 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:14.410 "is_configured": true, 00:14:14.410 "data_offset": 0, 00:14:14.410 "data_size": 65536 00:14:14.410 } 00:14:14.410 ] 00:14:14.410 }' 00:14:14.410 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.410 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.410 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.411 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.411 01:34:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.980 [2024-11-17 01:34:23.273225] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:14.980 [2024-11-17 01:34:23.273298] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:14.980 [2024-11-17 01:34:23.273343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.550 01:34:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.550 "name": "raid_bdev1", 00:14:15.550 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:15.550 "strip_size_kb": 0, 00:14:15.550 "state": "online", 00:14:15.550 "raid_level": "raid1", 00:14:15.550 "superblock": false, 00:14:15.550 "num_base_bdevs": 4, 00:14:15.550 "num_base_bdevs_discovered": 3, 00:14:15.550 "num_base_bdevs_operational": 3, 00:14:15.550 "base_bdevs_list": [ 00:14:15.550 { 00:14:15.550 "name": "spare", 00:14:15.550 "uuid": "638892e0-3738-5c6e-9857-7cc193d0b298", 00:14:15.550 "is_configured": true, 00:14:15.550 "data_offset": 0, 00:14:15.550 "data_size": 65536 00:14:15.550 }, 00:14:15.550 { 00:14:15.550 "name": null, 00:14:15.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.550 "is_configured": false, 00:14:15.550 "data_offset": 0, 00:14:15.550 "data_size": 65536 00:14:15.550 }, 00:14:15.550 { 00:14:15.550 "name": "BaseBdev3", 00:14:15.550 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:15.550 "is_configured": true, 00:14:15.550 "data_offset": 0, 00:14:15.550 "data_size": 65536 00:14:15.550 }, 00:14:15.550 { 00:14:15.550 "name": "BaseBdev4", 00:14:15.550 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:15.550 "is_configured": true, 00:14:15.550 "data_offset": 0, 00:14:15.550 "data_size": 65536 00:14:15.550 } 00:14:15.550 ] 00:14:15.550 }' 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.550 "name": "raid_bdev1", 00:14:15.550 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:15.550 "strip_size_kb": 0, 00:14:15.550 "state": "online", 00:14:15.550 "raid_level": "raid1", 00:14:15.550 "superblock": false, 00:14:15.550 "num_base_bdevs": 4, 00:14:15.550 "num_base_bdevs_discovered": 3, 00:14:15.550 "num_base_bdevs_operational": 3, 00:14:15.550 
"base_bdevs_list": [ 00:14:15.550 { 00:14:15.550 "name": "spare", 00:14:15.550 "uuid": "638892e0-3738-5c6e-9857-7cc193d0b298", 00:14:15.550 "is_configured": true, 00:14:15.550 "data_offset": 0, 00:14:15.550 "data_size": 65536 00:14:15.550 }, 00:14:15.550 { 00:14:15.550 "name": null, 00:14:15.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.550 "is_configured": false, 00:14:15.550 "data_offset": 0, 00:14:15.550 "data_size": 65536 00:14:15.550 }, 00:14:15.550 { 00:14:15.550 "name": "BaseBdev3", 00:14:15.550 "uuid": "bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:15.550 "is_configured": true, 00:14:15.550 "data_offset": 0, 00:14:15.550 "data_size": 65536 00:14:15.550 }, 00:14:15.550 { 00:14:15.550 "name": "BaseBdev4", 00:14:15.550 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:15.550 "is_configured": true, 00:14:15.550 "data_offset": 0, 00:14:15.550 "data_size": 65536 00:14:15.550 } 00:14:15.550 ] 00:14:15.550 }' 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.550 01:34:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.811 "name": "raid_bdev1", 00:14:15.811 "uuid": "f96c1a24-5283-45ab-8e65-345e4ec27f07", 00:14:15.811 "strip_size_kb": 0, 00:14:15.811 "state": "online", 00:14:15.811 "raid_level": "raid1", 00:14:15.811 "superblock": false, 00:14:15.811 "num_base_bdevs": 4, 00:14:15.811 "num_base_bdevs_discovered": 3, 00:14:15.811 "num_base_bdevs_operational": 3, 00:14:15.811 "base_bdevs_list": [ 00:14:15.811 { 00:14:15.811 "name": "spare", 00:14:15.811 "uuid": "638892e0-3738-5c6e-9857-7cc193d0b298", 00:14:15.811 "is_configured": true, 00:14:15.811 "data_offset": 0, 00:14:15.811 "data_size": 65536 00:14:15.811 }, 00:14:15.811 { 00:14:15.811 "name": null, 00:14:15.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.811 "is_configured": false, 00:14:15.811 "data_offset": 0, 00:14:15.811 "data_size": 65536 00:14:15.811 }, 00:14:15.811 { 00:14:15.811 "name": "BaseBdev3", 00:14:15.811 "uuid": 
"bd24a06d-b815-53dd-a703-4f25752ef766", 00:14:15.811 "is_configured": true, 00:14:15.811 "data_offset": 0, 00:14:15.811 "data_size": 65536 00:14:15.811 }, 00:14:15.811 { 00:14:15.811 "name": "BaseBdev4", 00:14:15.811 "uuid": "fa28693b-55af-5961-aa81-99228f8193c3", 00:14:15.811 "is_configured": true, 00:14:15.811 "data_offset": 0, 00:14:15.811 "data_size": 65536 00:14:15.811 } 00:14:15.811 ] 00:14:15.811 }' 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.811 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.071 [2024-11-17 01:34:24.457178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:16.071 [2024-11-17 01:34:24.457252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.071 [2024-11-17 01:34:24.457368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.071 [2024-11-17 01:34:24.457489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.071 [2024-11-17 01:34:24.457547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:16.071 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:16.331 /dev/nbd0 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:16.331 01:34:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.331 1+0 records in 00:14:16.331 1+0 records out 00:14:16.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346606 s, 11.8 MB/s 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:16.331 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:16.591 /dev/nbd1 00:14:16.591 
01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.591 01:34:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.591 1+0 records in 00:14:16.591 1+0 records out 00:14:16.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464754 s, 8.8 MB/s 00:14:16.591 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.591 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:16.591 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.591 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:16.591 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:16.591 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:16.591 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:16.591 01:34:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:16.851 01:34:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:16.851 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.851 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:16.851 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.851 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:16.851 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.851 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.110 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77314 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77314 ']' 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77314 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77314 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77314' 00:14:17.371 killing process with pid 77314 00:14:17.371 
Received shutdown signal, test time was about 60.000000 seconds 00:14:17.371 00:14:17.371 Latency(us) 00:14:17.371 [2024-11-17T01:34:25.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.371 [2024-11-17T01:34:25.831Z] =================================================================================================================== 00:14:17.371 [2024-11-17T01:34:25.831Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77314 00:14:17.371 [2024-11-17 01:34:25.626110] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.371 01:34:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77314 00:14:17.631 [2024-11-17 01:34:26.080554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:19.020 00:14:19.020 real 0m17.101s 00:14:19.020 user 0m18.868s 00:14:19.020 sys 0m3.135s 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.020 ************************************ 00:14:19.020 END TEST raid_rebuild_test 00:14:19.020 ************************************ 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.020 01:34:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:19.020 01:34:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:19.020 01:34:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.020 01:34:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.020 ************************************ 00:14:19.020 START TEST raid_rebuild_test_sb 00:14:19.020 ************************************ 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77750 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77750 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77750 ']' 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:19.020 01:34:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.020 [2024-11-17 01:34:27.288691] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:19.020 [2024-11-17 01:34:27.288892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77750 ] 00:14:19.020 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:19.020 Zero copy mechanism will not be used. 00:14:19.020 [2024-11-17 01:34:27.462665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.279 [2024-11-17 01:34:27.568226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.539 [2024-11-17 01:34:27.767750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.539 [2024-11-17 01:34:27.767845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.797 BaseBdev1_malloc 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.797 [2024-11-17 01:34:28.158278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.797 [2024-11-17 01:34:28.158430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.797 [2024-11-17 01:34:28.158473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:19.797 [2024-11-17 01:34:28.158504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.797 [2024-11-17 01:34:28.160629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.797 [2024-11-17 01:34:28.160702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.797 BaseBdev1 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.797 BaseBdev2_malloc 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.797 [2024-11-17 01:34:28.211078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:19.797 [2024-11-17 01:34:28.211143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.797 [2024-11-17 01:34:28.211161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:19.797 [2024-11-17 01:34:28.211174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.797 [2024-11-17 01:34:28.213245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.797 [2024-11-17 01:34:28.213284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:19.797 BaseBdev2 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.797 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.057 BaseBdev3_malloc 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.057 [2024-11-17 01:34:28.301860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:20.057 [2024-11-17 01:34:28.301964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.057 [2024-11-17 01:34:28.302003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:20.057 [2024-11-17 01:34:28.302033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.057 [2024-11-17 01:34:28.304078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.057 [2024-11-17 01:34:28.304156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:20.057 BaseBdev3 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.057 BaseBdev4_malloc 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:20.057 [2024-11-17 01:34:28.353650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:20.057 [2024-11-17 01:34:28.353699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.057 [2024-11-17 01:34:28.353732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:20.057 [2024-11-17 01:34:28.353742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.057 [2024-11-17 01:34:28.355732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.057 [2024-11-17 01:34:28.355781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:20.057 BaseBdev4 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.057 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.058 spare_malloc 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.058 spare_delay 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:20.058 01:34:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.058 [2024-11-17 01:34:28.420171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:20.058 [2024-11-17 01:34:28.420268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.058 [2024-11-17 01:34:28.420304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:20.058 [2024-11-17 01:34:28.420333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.058 [2024-11-17 01:34:28.422296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.058 [2024-11-17 01:34:28.422382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:20.058 spare 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.058 [2024-11-17 01:34:28.432205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.058 [2024-11-17 01:34:28.433968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.058 [2024-11-17 01:34:28.434029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.058 [2024-11-17 01:34:28.434075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:20.058 [2024-11-17 01:34:28.434233] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:20.058 [2024-11-17 01:34:28.434249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.058 [2024-11-17 01:34:28.434459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:20.058 [2024-11-17 01:34:28.434622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:20.058 [2024-11-17 01:34:28.434631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:20.058 [2024-11-17 01:34:28.434778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.058 "name": "raid_bdev1", 00:14:20.058 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:20.058 "strip_size_kb": 0, 00:14:20.058 "state": "online", 00:14:20.058 "raid_level": "raid1", 00:14:20.058 "superblock": true, 00:14:20.058 "num_base_bdevs": 4, 00:14:20.058 "num_base_bdevs_discovered": 4, 00:14:20.058 "num_base_bdevs_operational": 4, 00:14:20.058 "base_bdevs_list": [ 00:14:20.058 { 00:14:20.058 "name": "BaseBdev1", 00:14:20.058 "uuid": "8a52790a-2f0d-5cb5-8121-b1e816bfd294", 00:14:20.058 "is_configured": true, 00:14:20.058 "data_offset": 2048, 00:14:20.058 "data_size": 63488 00:14:20.058 }, 00:14:20.058 { 00:14:20.058 "name": "BaseBdev2", 00:14:20.058 "uuid": "82945695-9bac-53bb-87da-63998bc2891b", 00:14:20.058 "is_configured": true, 00:14:20.058 "data_offset": 2048, 00:14:20.058 "data_size": 63488 00:14:20.058 }, 00:14:20.058 { 00:14:20.058 "name": "BaseBdev3", 00:14:20.058 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:20.058 "is_configured": true, 00:14:20.058 "data_offset": 2048, 00:14:20.058 "data_size": 63488 00:14:20.058 }, 00:14:20.058 { 00:14:20.058 "name": "BaseBdev4", 00:14:20.058 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:20.058 "is_configured": true, 00:14:20.058 "data_offset": 2048, 00:14:20.058 "data_size": 63488 00:14:20.058 } 00:14:20.058 ] 00:14:20.058 }' 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.058 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.627 [2024-11-17 01:34:28.859902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.627 01:34:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:20.885 [2024-11-17 01:34:29.135126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:20.885 /dev/nbd0 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:20.885 
01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.885 1+0 records in 00:14:20.885 1+0 records out 00:14:20.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420895 s, 9.7 MB/s 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:20.885 01:34:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:26.160 63488+0 records in 00:14:26.160 63488+0 records out 00:14:26.160 32505856 bytes (33 MB, 31 MiB) copied, 4.82602 s, 6.7 MB/s 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.160 [2024-11-17 01:34:34.218521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.160 [2024-11-17 01:34:34.254861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.160 
01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.160 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.160 "name": "raid_bdev1", 00:14:26.160 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:26.160 "strip_size_kb": 0, 00:14:26.160 "state": 
"online", 00:14:26.160 "raid_level": "raid1", 00:14:26.160 "superblock": true, 00:14:26.160 "num_base_bdevs": 4, 00:14:26.160 "num_base_bdevs_discovered": 3, 00:14:26.160 "num_base_bdevs_operational": 3, 00:14:26.160 "base_bdevs_list": [ 00:14:26.160 { 00:14:26.160 "name": null, 00:14:26.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.160 "is_configured": false, 00:14:26.160 "data_offset": 0, 00:14:26.160 "data_size": 63488 00:14:26.160 }, 00:14:26.160 { 00:14:26.160 "name": "BaseBdev2", 00:14:26.160 "uuid": "82945695-9bac-53bb-87da-63998bc2891b", 00:14:26.160 "is_configured": true, 00:14:26.160 "data_offset": 2048, 00:14:26.161 "data_size": 63488 00:14:26.161 }, 00:14:26.161 { 00:14:26.161 "name": "BaseBdev3", 00:14:26.161 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:26.161 "is_configured": true, 00:14:26.161 "data_offset": 2048, 00:14:26.161 "data_size": 63488 00:14:26.161 }, 00:14:26.161 { 00:14:26.161 "name": "BaseBdev4", 00:14:26.161 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:26.161 "is_configured": true, 00:14:26.161 "data_offset": 2048, 00:14:26.161 "data_size": 63488 00:14:26.161 } 00:14:26.161 ] 00:14:26.161 }' 00:14:26.161 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.161 01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.420 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.420 01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.420 01:34:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.420 [2024-11-17 01:34:34.714078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.420 [2024-11-17 01:34:34.727919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:26.420 01:34:34 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.420 01:34:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:26.420 [2024-11-17 01:34:34.729710] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.358 "name": "raid_bdev1", 00:14:27.358 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:27.358 "strip_size_kb": 0, 00:14:27.358 "state": "online", 00:14:27.358 "raid_level": "raid1", 00:14:27.358 "superblock": true, 00:14:27.358 "num_base_bdevs": 4, 00:14:27.358 "num_base_bdevs_discovered": 4, 00:14:27.358 "num_base_bdevs_operational": 4, 00:14:27.358 "process": { 00:14:27.358 "type": "rebuild", 00:14:27.358 "target": "spare", 00:14:27.358 "progress": { 00:14:27.358 "blocks": 20480, 
00:14:27.358 "percent": 32 00:14:27.358 } 00:14:27.358 }, 00:14:27.358 "base_bdevs_list": [ 00:14:27.358 { 00:14:27.358 "name": "spare", 00:14:27.358 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:27.358 "is_configured": true, 00:14:27.358 "data_offset": 2048, 00:14:27.358 "data_size": 63488 00:14:27.358 }, 00:14:27.358 { 00:14:27.358 "name": "BaseBdev2", 00:14:27.358 "uuid": "82945695-9bac-53bb-87da-63998bc2891b", 00:14:27.358 "is_configured": true, 00:14:27.358 "data_offset": 2048, 00:14:27.358 "data_size": 63488 00:14:27.358 }, 00:14:27.358 { 00:14:27.358 "name": "BaseBdev3", 00:14:27.358 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:27.358 "is_configured": true, 00:14:27.358 "data_offset": 2048, 00:14:27.358 "data_size": 63488 00:14:27.358 }, 00:14:27.358 { 00:14:27.358 "name": "BaseBdev4", 00:14:27.358 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:27.358 "is_configured": true, 00:14:27.358 "data_offset": 2048, 00:14:27.358 "data_size": 63488 00:14:27.358 } 00:14:27.358 ] 00:14:27.358 }' 00:14:27.358 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.617 [2024-11-17 01:34:35.893006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.617 [2024-11-17 01:34:35.934506] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.617 [2024-11-17 01:34:35.934627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.617 [2024-11-17 01:34:35.934665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.617 [2024-11-17 01:34:35.934688] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.617 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.618 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.618 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.618 01:34:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.618 01:34:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.618 01:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.618 01:34:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.618 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.618 "name": "raid_bdev1", 00:14:27.618 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:27.618 "strip_size_kb": 0, 00:14:27.618 "state": "online", 00:14:27.618 "raid_level": "raid1", 00:14:27.618 "superblock": true, 00:14:27.618 "num_base_bdevs": 4, 00:14:27.618 "num_base_bdevs_discovered": 3, 00:14:27.618 "num_base_bdevs_operational": 3, 00:14:27.618 "base_bdevs_list": [ 00:14:27.618 { 00:14:27.618 "name": null, 00:14:27.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.618 "is_configured": false, 00:14:27.618 "data_offset": 0, 00:14:27.618 "data_size": 63488 00:14:27.618 }, 00:14:27.618 { 00:14:27.618 "name": "BaseBdev2", 00:14:27.618 "uuid": "82945695-9bac-53bb-87da-63998bc2891b", 00:14:27.618 "is_configured": true, 00:14:27.618 "data_offset": 2048, 00:14:27.618 "data_size": 63488 00:14:27.618 }, 00:14:27.618 { 00:14:27.618 "name": "BaseBdev3", 00:14:27.618 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:27.618 "is_configured": true, 00:14:27.618 "data_offset": 2048, 00:14:27.618 "data_size": 63488 00:14:27.618 }, 00:14:27.618 { 00:14:27.618 "name": "BaseBdev4", 00:14:27.618 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:27.618 "is_configured": true, 00:14:27.618 "data_offset": 2048, 00:14:27.618 "data_size": 63488 00:14:27.618 } 00:14:27.618 ] 00:14:27.618 }' 00:14:27.618 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.618 01:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.186 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:14:28.186 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.186 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.186 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.186 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.186 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.187 "name": "raid_bdev1", 00:14:28.187 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:28.187 "strip_size_kb": 0, 00:14:28.187 "state": "online", 00:14:28.187 "raid_level": "raid1", 00:14:28.187 "superblock": true, 00:14:28.187 "num_base_bdevs": 4, 00:14:28.187 "num_base_bdevs_discovered": 3, 00:14:28.187 "num_base_bdevs_operational": 3, 00:14:28.187 "base_bdevs_list": [ 00:14:28.187 { 00:14:28.187 "name": null, 00:14:28.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.187 "is_configured": false, 00:14:28.187 "data_offset": 0, 00:14:28.187 "data_size": 63488 00:14:28.187 }, 00:14:28.187 { 00:14:28.187 "name": "BaseBdev2", 00:14:28.187 "uuid": "82945695-9bac-53bb-87da-63998bc2891b", 00:14:28.187 "is_configured": true, 00:14:28.187 "data_offset": 2048, 00:14:28.187 "data_size": 63488 00:14:28.187 }, 00:14:28.187 { 00:14:28.187 "name": "BaseBdev3", 00:14:28.187 "uuid": 
"2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:28.187 "is_configured": true, 00:14:28.187 "data_offset": 2048, 00:14:28.187 "data_size": 63488 00:14:28.187 }, 00:14:28.187 { 00:14:28.187 "name": "BaseBdev4", 00:14:28.187 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:28.187 "is_configured": true, 00:14:28.187 "data_offset": 2048, 00:14:28.187 "data_size": 63488 00:14:28.187 } 00:14:28.187 ] 00:14:28.187 }' 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.187 [2024-11-17 01:34:36.557946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.187 [2024-11-17 01:34:36.571416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.187 01:34:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.187 [2024-11-17 01:34:36.573263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.131 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.390 "name": "raid_bdev1", 00:14:29.390 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:29.390 "strip_size_kb": 0, 00:14:29.390 "state": "online", 00:14:29.390 "raid_level": "raid1", 00:14:29.390 "superblock": true, 00:14:29.390 "num_base_bdevs": 4, 00:14:29.390 "num_base_bdevs_discovered": 4, 00:14:29.390 "num_base_bdevs_operational": 4, 00:14:29.390 "process": { 00:14:29.390 "type": "rebuild", 00:14:29.390 "target": "spare", 00:14:29.390 "progress": { 00:14:29.390 "blocks": 20480, 00:14:29.390 "percent": 32 00:14:29.390 } 00:14:29.390 }, 00:14:29.390 "base_bdevs_list": [ 00:14:29.390 { 00:14:29.390 "name": "spare", 00:14:29.390 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:29.390 "is_configured": true, 00:14:29.390 "data_offset": 2048, 00:14:29.390 "data_size": 63488 00:14:29.390 }, 00:14:29.390 { 00:14:29.390 "name": "BaseBdev2", 00:14:29.390 "uuid": "82945695-9bac-53bb-87da-63998bc2891b", 00:14:29.390 "is_configured": true, 00:14:29.390 "data_offset": 2048, 
00:14:29.390 "data_size": 63488 00:14:29.390 }, 00:14:29.390 { 00:14:29.390 "name": "BaseBdev3", 00:14:29.390 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:29.390 "is_configured": true, 00:14:29.390 "data_offset": 2048, 00:14:29.390 "data_size": 63488 00:14:29.390 }, 00:14:29.390 { 00:14:29.390 "name": "BaseBdev4", 00:14:29.390 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:29.390 "is_configured": true, 00:14:29.390 "data_offset": 2048, 00:14:29.390 "data_size": 63488 00:14:29.390 } 00:14:29.390 ] 00:14:29.390 }' 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:29.390 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.390 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.390 [2024-11-17 01:34:37.736888] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.650 [2024-11-17 01:34:37.878227] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.650 "name": "raid_bdev1", 00:14:29.650 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:29.650 "strip_size_kb": 0, 00:14:29.650 "state": "online", 00:14:29.650 "raid_level": "raid1", 00:14:29.650 "superblock": true, 00:14:29.650 "num_base_bdevs": 4, 
00:14:29.650 "num_base_bdevs_discovered": 3, 00:14:29.650 "num_base_bdevs_operational": 3, 00:14:29.650 "process": { 00:14:29.650 "type": "rebuild", 00:14:29.650 "target": "spare", 00:14:29.650 "progress": { 00:14:29.650 "blocks": 24576, 00:14:29.650 "percent": 38 00:14:29.650 } 00:14:29.650 }, 00:14:29.650 "base_bdevs_list": [ 00:14:29.650 { 00:14:29.650 "name": "spare", 00:14:29.650 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:29.650 "is_configured": true, 00:14:29.650 "data_offset": 2048, 00:14:29.650 "data_size": 63488 00:14:29.650 }, 00:14:29.650 { 00:14:29.650 "name": null, 00:14:29.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.650 "is_configured": false, 00:14:29.650 "data_offset": 0, 00:14:29.650 "data_size": 63488 00:14:29.650 }, 00:14:29.650 { 00:14:29.650 "name": "BaseBdev3", 00:14:29.650 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:29.650 "is_configured": true, 00:14:29.650 "data_offset": 2048, 00:14:29.650 "data_size": 63488 00:14:29.650 }, 00:14:29.650 { 00:14:29.650 "name": "BaseBdev4", 00:14:29.650 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:29.650 "is_configured": true, 00:14:29.650 "data_offset": 2048, 00:14:29.650 "data_size": 63488 00:14:29.650 } 00:14:29.650 ] 00:14:29.650 }' 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.650 01:34:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=452 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.650 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.650 "name": "raid_bdev1", 00:14:29.650 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:29.650 "strip_size_kb": 0, 00:14:29.650 "state": "online", 00:14:29.650 "raid_level": "raid1", 00:14:29.650 "superblock": true, 00:14:29.650 "num_base_bdevs": 4, 00:14:29.650 "num_base_bdevs_discovered": 3, 00:14:29.650 "num_base_bdevs_operational": 3, 00:14:29.650 "process": { 00:14:29.650 "type": "rebuild", 00:14:29.650 "target": "spare", 00:14:29.650 "progress": { 00:14:29.650 "blocks": 26624, 00:14:29.650 "percent": 41 00:14:29.650 } 00:14:29.650 }, 00:14:29.650 "base_bdevs_list": [ 00:14:29.650 { 00:14:29.650 "name": "spare", 00:14:29.650 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:29.650 "is_configured": true, 00:14:29.650 "data_offset": 2048, 00:14:29.650 "data_size": 63488 00:14:29.650 }, 00:14:29.650 { 
00:14:29.650 "name": null, 00:14:29.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.650 "is_configured": false, 00:14:29.650 "data_offset": 0, 00:14:29.650 "data_size": 63488 00:14:29.650 }, 00:14:29.650 { 00:14:29.650 "name": "BaseBdev3", 00:14:29.650 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:29.650 "is_configured": true, 00:14:29.650 "data_offset": 2048, 00:14:29.650 "data_size": 63488 00:14:29.650 }, 00:14:29.650 { 00:14:29.650 "name": "BaseBdev4", 00:14:29.650 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:29.650 "is_configured": true, 00:14:29.650 "data_offset": 2048, 00:14:29.650 "data_size": 63488 00:14:29.651 } 00:14:29.651 ] 00:14:29.651 }' 00:14:29.651 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.651 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.651 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.910 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.910 01:34:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.847 "name": "raid_bdev1", 00:14:30.847 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:30.847 "strip_size_kb": 0, 00:14:30.847 "state": "online", 00:14:30.847 "raid_level": "raid1", 00:14:30.847 "superblock": true, 00:14:30.847 "num_base_bdevs": 4, 00:14:30.847 "num_base_bdevs_discovered": 3, 00:14:30.847 "num_base_bdevs_operational": 3, 00:14:30.847 "process": { 00:14:30.847 "type": "rebuild", 00:14:30.847 "target": "spare", 00:14:30.847 "progress": { 00:14:30.847 "blocks": 49152, 00:14:30.847 "percent": 77 00:14:30.847 } 00:14:30.847 }, 00:14:30.847 "base_bdevs_list": [ 00:14:30.847 { 00:14:30.847 "name": "spare", 00:14:30.847 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:30.847 "is_configured": true, 00:14:30.847 "data_offset": 2048, 00:14:30.847 "data_size": 63488 00:14:30.847 }, 00:14:30.847 { 00:14:30.847 "name": null, 00:14:30.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.847 "is_configured": false, 00:14:30.847 "data_offset": 0, 00:14:30.847 "data_size": 63488 00:14:30.847 }, 00:14:30.847 { 00:14:30.847 "name": "BaseBdev3", 00:14:30.847 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:30.847 "is_configured": true, 00:14:30.847 "data_offset": 2048, 00:14:30.847 "data_size": 63488 00:14:30.847 }, 00:14:30.847 { 00:14:30.847 "name": "BaseBdev4", 00:14:30.847 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:30.847 "is_configured": true, 00:14:30.847 "data_offset": 
2048, 00:14:30.847 "data_size": 63488 00:14:30.847 } 00:14:30.847 ] 00:14:30.847 }' 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.847 01:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.415 [2024-11-17 01:34:39.785440] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:31.415 [2024-11-17 01:34:39.785588] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:31.415 [2024-11-17 01:34:39.785733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.983 "name": "raid_bdev1", 00:14:31.983 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:31.983 "strip_size_kb": 0, 00:14:31.983 "state": "online", 00:14:31.983 "raid_level": "raid1", 00:14:31.983 "superblock": true, 00:14:31.983 "num_base_bdevs": 4, 00:14:31.983 "num_base_bdevs_discovered": 3, 00:14:31.983 "num_base_bdevs_operational": 3, 00:14:31.983 "base_bdevs_list": [ 00:14:31.983 { 00:14:31.983 "name": "spare", 00:14:31.983 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:31.983 "is_configured": true, 00:14:31.983 "data_offset": 2048, 00:14:31.983 "data_size": 63488 00:14:31.983 }, 00:14:31.983 { 00:14:31.983 "name": null, 00:14:31.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.983 "is_configured": false, 00:14:31.983 "data_offset": 0, 00:14:31.983 "data_size": 63488 00:14:31.983 }, 00:14:31.983 { 00:14:31.983 "name": "BaseBdev3", 00:14:31.983 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:31.983 "is_configured": true, 00:14:31.983 "data_offset": 2048, 00:14:31.983 "data_size": 63488 00:14:31.983 }, 00:14:31.983 { 00:14:31.983 "name": "BaseBdev4", 00:14:31.983 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:31.983 "is_configured": true, 00:14:31.983 "data_offset": 2048, 00:14:31.983 "data_size": 63488 00:14:31.983 } 00:14:31.983 ] 00:14:31.983 }' 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.983 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.984 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.984 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.243 "name": "raid_bdev1", 00:14:32.243 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:32.243 "strip_size_kb": 0, 00:14:32.243 "state": "online", 00:14:32.243 "raid_level": "raid1", 00:14:32.243 "superblock": true, 00:14:32.243 "num_base_bdevs": 4, 00:14:32.243 "num_base_bdevs_discovered": 3, 00:14:32.243 "num_base_bdevs_operational": 3, 00:14:32.243 "base_bdevs_list": [ 00:14:32.243 { 00:14:32.243 "name": "spare", 00:14:32.243 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:32.243 "is_configured": true, 00:14:32.243 "data_offset": 2048, 00:14:32.243 "data_size": 63488 
00:14:32.243 }, 00:14:32.243 { 00:14:32.243 "name": null, 00:14:32.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.243 "is_configured": false, 00:14:32.243 "data_offset": 0, 00:14:32.243 "data_size": 63488 00:14:32.243 }, 00:14:32.243 { 00:14:32.243 "name": "BaseBdev3", 00:14:32.243 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:32.243 "is_configured": true, 00:14:32.243 "data_offset": 2048, 00:14:32.243 "data_size": 63488 00:14:32.243 }, 00:14:32.243 { 00:14:32.243 "name": "BaseBdev4", 00:14:32.243 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:32.243 "is_configured": true, 00:14:32.243 "data_offset": 2048, 00:14:32.243 "data_size": 63488 00:14:32.243 } 00:14:32.243 ] 00:14:32.243 }' 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.243 01:34:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.243 "name": "raid_bdev1", 00:14:32.243 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:32.243 "strip_size_kb": 0, 00:14:32.243 "state": "online", 00:14:32.243 "raid_level": "raid1", 00:14:32.243 "superblock": true, 00:14:32.243 "num_base_bdevs": 4, 00:14:32.243 "num_base_bdevs_discovered": 3, 00:14:32.243 "num_base_bdevs_operational": 3, 00:14:32.243 "base_bdevs_list": [ 00:14:32.243 { 00:14:32.243 "name": "spare", 00:14:32.243 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:32.243 "is_configured": true, 00:14:32.243 "data_offset": 2048, 00:14:32.243 "data_size": 63488 00:14:32.243 }, 00:14:32.243 { 00:14:32.243 "name": null, 00:14:32.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.243 "is_configured": false, 00:14:32.243 "data_offset": 0, 00:14:32.243 "data_size": 63488 00:14:32.243 }, 00:14:32.243 { 00:14:32.243 "name": "BaseBdev3", 00:14:32.243 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:32.243 "is_configured": true, 00:14:32.243 "data_offset": 2048, 00:14:32.243 "data_size": 63488 00:14:32.243 }, 
00:14:32.243 { 00:14:32.243 "name": "BaseBdev4", 00:14:32.243 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:32.243 "is_configured": true, 00:14:32.243 "data_offset": 2048, 00:14:32.243 "data_size": 63488 00:14:32.243 } 00:14:32.243 ] 00:14:32.243 }' 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.243 01:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.812 [2024-11-17 01:34:41.007681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.812 [2024-11-17 01:34:41.007771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.812 [2024-11-17 01:34:41.007888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.812 [2024-11-17 01:34:41.007979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.812 [2024-11-17 01:34:41.008026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.812 01:34:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.812 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:33.071 /dev/nbd0 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.071 1+0 records in 00:14:33.071 1+0 records out 00:14:33.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395788 s, 10.3 MB/s 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.071 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:33.071 /dev/nbd1 00:14:33.331 01:34:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.331 1+0 records in 00:14:33.331 1+0 records out 00:14:33.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432999 s, 9.5 MB/s 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:33.331 01:34:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.331 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.590 01:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.849 [2024-11-17 01:34:42.173121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:33.849 [2024-11-17 01:34:42.173220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.849 [2024-11-17 01:34:42.173260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:33.849 [2024-11-17 01:34:42.173288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.849 [2024-11-17 01:34:42.175500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.849 [2024-11-17 01:34:42.175576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.849 [2024-11-17 01:34:42.175705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:33.849 [2024-11-17 01:34:42.175804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.849 [2024-11-17 01:34:42.176016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.849 [2024-11-17 01:34:42.176175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.849 spare 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.849 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.850 [2024-11-17 01:34:42.276126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:33.850 [2024-11-17 01:34:42.276205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.850 [2024-11-17 01:34:42.276570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:33.850 [2024-11-17 01:34:42.276822] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:33.850 [2024-11-17 01:34:42.276872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:33.850 [2024-11-17 01:34:42.277116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.850 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:34.109 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.109 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.109 "name": "raid_bdev1", 00:14:34.109 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:34.109 "strip_size_kb": 0, 00:14:34.109 "state": "online", 00:14:34.109 "raid_level": "raid1", 00:14:34.109 "superblock": true, 00:14:34.109 "num_base_bdevs": 4, 00:14:34.109 "num_base_bdevs_discovered": 3, 00:14:34.109 "num_base_bdevs_operational": 3, 00:14:34.109 "base_bdevs_list": [ 00:14:34.109 { 00:14:34.109 "name": "spare", 00:14:34.109 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:34.109 "is_configured": true, 00:14:34.109 "data_offset": 2048, 00:14:34.109 "data_size": 63488 00:14:34.109 }, 00:14:34.109 { 00:14:34.109 "name": null, 00:14:34.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.109 "is_configured": false, 00:14:34.109 "data_offset": 2048, 00:14:34.109 "data_size": 63488 00:14:34.109 }, 00:14:34.109 { 00:14:34.109 "name": "BaseBdev3", 00:14:34.109 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:34.109 "is_configured": true, 00:14:34.109 "data_offset": 2048, 00:14:34.109 "data_size": 63488 00:14:34.109 }, 00:14:34.109 { 00:14:34.109 "name": "BaseBdev4", 00:14:34.109 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:34.109 "is_configured": true, 00:14:34.109 "data_offset": 2048, 00:14:34.109 "data_size": 63488 00:14:34.109 } 00:14:34.109 ] 00:14:34.109 }' 00:14:34.109 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.109 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.368 01:34:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.368 "name": "raid_bdev1", 00:14:34.368 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:34.368 "strip_size_kb": 0, 00:14:34.368 "state": "online", 00:14:34.368 "raid_level": "raid1", 00:14:34.368 "superblock": true, 00:14:34.368 "num_base_bdevs": 4, 00:14:34.368 "num_base_bdevs_discovered": 3, 00:14:34.368 "num_base_bdevs_operational": 3, 00:14:34.368 "base_bdevs_list": [ 00:14:34.368 { 00:14:34.368 "name": "spare", 00:14:34.368 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:34.368 "is_configured": true, 00:14:34.368 "data_offset": 2048, 00:14:34.368 "data_size": 63488 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": null, 00:14:34.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.368 "is_configured": false, 00:14:34.368 "data_offset": 2048, 00:14:34.368 "data_size": 63488 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": "BaseBdev3", 00:14:34.368 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:34.368 "is_configured": true, 00:14:34.368 "data_offset": 2048, 00:14:34.368 "data_size": 63488 00:14:34.368 
}, 00:14:34.368 { 00:14:34.368 "name": "BaseBdev4", 00:14:34.368 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:34.368 "is_configured": true, 00:14:34.368 "data_offset": 2048, 00:14:34.368 "data_size": 63488 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 }' 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.368 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.629 [2024-11-17 01:34:42.860037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.629 "name": "raid_bdev1", 00:14:34.629 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:34.629 "strip_size_kb": 0, 00:14:34.629 "state": "online", 00:14:34.629 "raid_level": "raid1", 00:14:34.629 "superblock": true, 00:14:34.629 "num_base_bdevs": 4, 00:14:34.629 "num_base_bdevs_discovered": 2, 00:14:34.629 "num_base_bdevs_operational": 
2, 00:14:34.629 "base_bdevs_list": [ 00:14:34.629 { 00:14:34.629 "name": null, 00:14:34.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.629 "is_configured": false, 00:14:34.629 "data_offset": 0, 00:14:34.629 "data_size": 63488 00:14:34.629 }, 00:14:34.629 { 00:14:34.629 "name": null, 00:14:34.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.629 "is_configured": false, 00:14:34.629 "data_offset": 2048, 00:14:34.629 "data_size": 63488 00:14:34.629 }, 00:14:34.629 { 00:14:34.629 "name": "BaseBdev3", 00:14:34.629 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:34.629 "is_configured": true, 00:14:34.629 "data_offset": 2048, 00:14:34.629 "data_size": 63488 00:14:34.629 }, 00:14:34.629 { 00:14:34.629 "name": "BaseBdev4", 00:14:34.629 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:34.629 "is_configured": true, 00:14:34.629 "data_offset": 2048, 00:14:34.629 "data_size": 63488 00:14:34.629 } 00:14:34.629 ] 00:14:34.629 }' 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.629 01:34:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.889 01:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.889 01:34:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.889 01:34:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.889 [2024-11-17 01:34:43.263400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.889 [2024-11-17 01:34:43.263614] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:34.889 [2024-11-17 01:34:43.263674] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:34.889 [2024-11-17 01:34:43.263747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.889 [2024-11-17 01:34:43.277513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:34.889 01:34:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.889 01:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:34.889 [2024-11-17 01:34:43.279329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.863 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.122 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.122 "name": "raid_bdev1", 00:14:36.122 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:36.122 "strip_size_kb": 0, 00:14:36.122 "state": "online", 00:14:36.122 "raid_level": "raid1", 
00:14:36.122 "superblock": true, 00:14:36.122 "num_base_bdevs": 4, 00:14:36.122 "num_base_bdevs_discovered": 3, 00:14:36.122 "num_base_bdevs_operational": 3, 00:14:36.122 "process": { 00:14:36.122 "type": "rebuild", 00:14:36.122 "target": "spare", 00:14:36.122 "progress": { 00:14:36.122 "blocks": 20480, 00:14:36.122 "percent": 32 00:14:36.122 } 00:14:36.122 }, 00:14:36.122 "base_bdevs_list": [ 00:14:36.122 { 00:14:36.122 "name": "spare", 00:14:36.122 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:36.122 "is_configured": true, 00:14:36.122 "data_offset": 2048, 00:14:36.122 "data_size": 63488 00:14:36.122 }, 00:14:36.122 { 00:14:36.122 "name": null, 00:14:36.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.122 "is_configured": false, 00:14:36.122 "data_offset": 2048, 00:14:36.122 "data_size": 63488 00:14:36.122 }, 00:14:36.122 { 00:14:36.122 "name": "BaseBdev3", 00:14:36.122 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:36.122 "is_configured": true, 00:14:36.122 "data_offset": 2048, 00:14:36.122 "data_size": 63488 00:14:36.122 }, 00:14:36.122 { 00:14:36.122 "name": "BaseBdev4", 00:14:36.122 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:36.122 "is_configured": true, 00:14:36.122 "data_offset": 2048, 00:14:36.122 "data_size": 63488 00:14:36.122 } 00:14:36.122 ] 00:14:36.122 }' 00:14:36.122 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.123 [2024-11-17 01:34:44.415285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.123 [2024-11-17 01:34:44.483985] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:36.123 [2024-11-17 01:34:44.484104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.123 [2024-11-17 01:34:44.484144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:36.123 [2024-11-17 01:34:44.484166] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.123 "name": "raid_bdev1", 00:14:36.123 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:36.123 "strip_size_kb": 0, 00:14:36.123 "state": "online", 00:14:36.123 "raid_level": "raid1", 00:14:36.123 "superblock": true, 00:14:36.123 "num_base_bdevs": 4, 00:14:36.123 "num_base_bdevs_discovered": 2, 00:14:36.123 "num_base_bdevs_operational": 2, 00:14:36.123 "base_bdevs_list": [ 00:14:36.123 { 00:14:36.123 "name": null, 00:14:36.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.123 "is_configured": false, 00:14:36.123 "data_offset": 0, 00:14:36.123 "data_size": 63488 00:14:36.123 }, 00:14:36.123 { 00:14:36.123 "name": null, 00:14:36.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.123 "is_configured": false, 00:14:36.123 "data_offset": 2048, 00:14:36.123 "data_size": 63488 00:14:36.123 }, 00:14:36.123 { 00:14:36.123 "name": "BaseBdev3", 00:14:36.123 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:36.123 "is_configured": true, 00:14:36.123 "data_offset": 2048, 00:14:36.123 "data_size": 63488 00:14:36.123 }, 00:14:36.123 { 00:14:36.123 "name": "BaseBdev4", 00:14:36.123 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:36.123 "is_configured": true, 00:14:36.123 "data_offset": 2048, 00:14:36.123 "data_size": 63488 00:14:36.123 } 00:14:36.123 ] 00:14:36.123 }' 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:36.123 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.692 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.693 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.693 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.693 [2024-11-17 01:34:44.937360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.693 [2024-11-17 01:34:44.937485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.693 [2024-11-17 01:34:44.937528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:36.693 [2024-11-17 01:34:44.937556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.693 [2024-11-17 01:34:44.938052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.693 [2024-11-17 01:34:44.938113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.693 [2024-11-17 01:34:44.938235] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:36.693 [2024-11-17 01:34:44.938275] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:36.693 [2024-11-17 01:34:44.938327] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:36.693 [2024-11-17 01:34:44.938398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.693 [2024-11-17 01:34:44.952015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:36.693 spare 00:14:36.693 01:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.693 01:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:36.693 [2024-11-17 01:34:44.953837] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.630 01:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.630 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.630 "name": "raid_bdev1", 00:14:37.630 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:37.630 "strip_size_kb": 0, 00:14:37.630 "state": "online", 00:14:37.630 
"raid_level": "raid1", 00:14:37.630 "superblock": true, 00:14:37.630 "num_base_bdevs": 4, 00:14:37.630 "num_base_bdevs_discovered": 3, 00:14:37.630 "num_base_bdevs_operational": 3, 00:14:37.630 "process": { 00:14:37.630 "type": "rebuild", 00:14:37.630 "target": "spare", 00:14:37.630 "progress": { 00:14:37.630 "blocks": 20480, 00:14:37.630 "percent": 32 00:14:37.630 } 00:14:37.630 }, 00:14:37.630 "base_bdevs_list": [ 00:14:37.630 { 00:14:37.630 "name": "spare", 00:14:37.630 "uuid": "97730067-c4b1-58e0-ac03-8f13aea906a2", 00:14:37.630 "is_configured": true, 00:14:37.630 "data_offset": 2048, 00:14:37.630 "data_size": 63488 00:14:37.630 }, 00:14:37.630 { 00:14:37.630 "name": null, 00:14:37.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.630 "is_configured": false, 00:14:37.630 "data_offset": 2048, 00:14:37.630 "data_size": 63488 00:14:37.630 }, 00:14:37.630 { 00:14:37.630 "name": "BaseBdev3", 00:14:37.630 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:37.630 "is_configured": true, 00:14:37.630 "data_offset": 2048, 00:14:37.630 "data_size": 63488 00:14:37.630 }, 00:14:37.630 { 00:14:37.630 "name": "BaseBdev4", 00:14:37.630 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:37.630 "is_configured": true, 00:14:37.630 "data_offset": 2048, 00:14:37.630 "data_size": 63488 00:14:37.630 } 00:14:37.630 ] 00:14:37.630 }' 00:14:37.630 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.630 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.630 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.630 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.630 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:37.630 01:34:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.630 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.890 [2024-11-17 01:34:46.089646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.890 [2024-11-17 01:34:46.158520] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.890 [2024-11-17 01:34:46.158622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.890 [2024-11-17 01:34:46.158672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.890 [2024-11-17 01:34:46.158694] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.890 
01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.890 "name": "raid_bdev1", 00:14:37.890 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:37.890 "strip_size_kb": 0, 00:14:37.890 "state": "online", 00:14:37.890 "raid_level": "raid1", 00:14:37.890 "superblock": true, 00:14:37.890 "num_base_bdevs": 4, 00:14:37.890 "num_base_bdevs_discovered": 2, 00:14:37.890 "num_base_bdevs_operational": 2, 00:14:37.890 "base_bdevs_list": [ 00:14:37.890 { 00:14:37.890 "name": null, 00:14:37.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.890 "is_configured": false, 00:14:37.890 "data_offset": 0, 00:14:37.890 "data_size": 63488 00:14:37.890 }, 00:14:37.890 { 00:14:37.890 "name": null, 00:14:37.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.890 "is_configured": false, 00:14:37.890 "data_offset": 2048, 00:14:37.890 "data_size": 63488 00:14:37.890 }, 00:14:37.890 { 00:14:37.890 "name": "BaseBdev3", 00:14:37.890 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:37.890 "is_configured": true, 00:14:37.890 "data_offset": 2048, 00:14:37.890 "data_size": 63488 00:14:37.890 }, 00:14:37.890 { 00:14:37.890 "name": "BaseBdev4", 00:14:37.890 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:37.890 "is_configured": true, 00:14:37.890 "data_offset": 2048, 00:14:37.890 "data_size": 63488 00:14:37.890 } 00:14:37.890 ] 00:14:37.890 }' 00:14:37.890 01:34:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.890 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.459 "name": "raid_bdev1", 00:14:38.459 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:38.459 "strip_size_kb": 0, 00:14:38.459 "state": "online", 00:14:38.459 "raid_level": "raid1", 00:14:38.459 "superblock": true, 00:14:38.459 "num_base_bdevs": 4, 00:14:38.459 "num_base_bdevs_discovered": 2, 00:14:38.459 "num_base_bdevs_operational": 2, 00:14:38.459 "base_bdevs_list": [ 00:14:38.459 { 00:14:38.459 "name": null, 00:14:38.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.459 "is_configured": false, 00:14:38.459 "data_offset": 0, 00:14:38.459 "data_size": 63488 00:14:38.459 }, 00:14:38.459 
{ 00:14:38.459 "name": null, 00:14:38.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.459 "is_configured": false, 00:14:38.459 "data_offset": 2048, 00:14:38.459 "data_size": 63488 00:14:38.459 }, 00:14:38.459 { 00:14:38.459 "name": "BaseBdev3", 00:14:38.459 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:38.459 "is_configured": true, 00:14:38.459 "data_offset": 2048, 00:14:38.459 "data_size": 63488 00:14:38.459 }, 00:14:38.459 { 00:14:38.459 "name": "BaseBdev4", 00:14:38.459 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:38.459 "is_configured": true, 00:14:38.459 "data_offset": 2048, 00:14:38.459 "data_size": 63488 00:14:38.459 } 00:14:38.459 ] 00:14:38.459 }' 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 [2024-11-17 01:34:46.778291] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:38.459 [2024-11-17 01:34:46.778407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.459 [2024-11-17 01:34:46.778442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:38.459 [2024-11-17 01:34:46.778472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.459 [2024-11-17 01:34:46.778945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.459 [2024-11-17 01:34:46.779007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.459 [2024-11-17 01:34:46.779126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:38.459 [2024-11-17 01:34:46.779172] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:38.459 [2024-11-17 01:34:46.779211] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:38.459 [2024-11-17 01:34:46.779269] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:38.459 BaseBdev1 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 01:34:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.397 01:34:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.397 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.398 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.398 01:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.398 01:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.398 01:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.398 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.398 "name": "raid_bdev1", 00:14:39.398 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:39.398 "strip_size_kb": 0, 00:14:39.398 "state": "online", 00:14:39.398 "raid_level": "raid1", 00:14:39.398 "superblock": true, 00:14:39.398 "num_base_bdevs": 4, 00:14:39.398 "num_base_bdevs_discovered": 2, 00:14:39.398 "num_base_bdevs_operational": 2, 00:14:39.398 "base_bdevs_list": [ 00:14:39.398 { 00:14:39.398 "name": null, 00:14:39.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.398 "is_configured": false, 00:14:39.398 "data_offset": 0, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": null, 00:14:39.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.398 
"is_configured": false, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": "BaseBdev3", 00:14:39.398 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:39.398 "is_configured": true, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": "BaseBdev4", 00:14:39.398 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:39.398 "is_configured": true, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 } 00:14:39.398 ] 00:14:39.398 }' 00:14:39.398 01:34:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.398 01:34:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.978 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:39.978 "name": "raid_bdev1", 00:14:39.978 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:39.978 "strip_size_kb": 0, 00:14:39.978 "state": "online", 00:14:39.978 "raid_level": "raid1", 00:14:39.978 "superblock": true, 00:14:39.978 "num_base_bdevs": 4, 00:14:39.978 "num_base_bdevs_discovered": 2, 00:14:39.978 "num_base_bdevs_operational": 2, 00:14:39.978 "base_bdevs_list": [ 00:14:39.978 { 00:14:39.978 "name": null, 00:14:39.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.978 "is_configured": false, 00:14:39.978 "data_offset": 0, 00:14:39.978 "data_size": 63488 00:14:39.978 }, 00:14:39.978 { 00:14:39.978 "name": null, 00:14:39.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.978 "is_configured": false, 00:14:39.978 "data_offset": 2048, 00:14:39.978 "data_size": 63488 00:14:39.978 }, 00:14:39.978 { 00:14:39.979 "name": "BaseBdev3", 00:14:39.979 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:39.979 "is_configured": true, 00:14:39.979 "data_offset": 2048, 00:14:39.979 "data_size": 63488 00:14:39.979 }, 00:14:39.979 { 00:14:39.979 "name": "BaseBdev4", 00:14:39.979 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:39.979 "is_configured": true, 00:14:39.979 "data_offset": 2048, 00:14:39.979 "data_size": 63488 00:14:39.979 } 00:14:39.979 ] 00:14:39.979 }' 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.979 [2024-11-17 01:34:48.347595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.979 [2024-11-17 01:34:48.347797] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:39.979 [2024-11-17 01:34:48.347812] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:39.979 request: 00:14:39.979 { 00:14:39.979 "base_bdev": "BaseBdev1", 00:14:39.979 "raid_bdev": "raid_bdev1", 00:14:39.979 "method": "bdev_raid_add_base_bdev", 00:14:39.979 "req_id": 1 00:14:39.979 } 00:14:39.979 Got JSON-RPC error response 00:14:39.979 response: 00:14:39.979 { 00:14:39.979 "code": -22, 00:14:39.979 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:39.979 } 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:39.979 01:34:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.920 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:41.178 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.178 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.178 "name": "raid_bdev1", 00:14:41.178 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:41.178 "strip_size_kb": 0, 00:14:41.178 "state": "online", 00:14:41.178 "raid_level": "raid1", 00:14:41.178 "superblock": true, 00:14:41.178 "num_base_bdevs": 4, 00:14:41.178 "num_base_bdevs_discovered": 2, 00:14:41.178 "num_base_bdevs_operational": 2, 00:14:41.178 "base_bdevs_list": [ 00:14:41.178 { 00:14:41.178 "name": null, 00:14:41.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.178 "is_configured": false, 00:14:41.178 "data_offset": 0, 00:14:41.178 "data_size": 63488 00:14:41.178 }, 00:14:41.178 { 00:14:41.178 "name": null, 00:14:41.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.178 "is_configured": false, 00:14:41.178 "data_offset": 2048, 00:14:41.178 "data_size": 63488 00:14:41.178 }, 00:14:41.178 { 00:14:41.178 "name": "BaseBdev3", 00:14:41.178 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:41.178 "is_configured": true, 00:14:41.178 "data_offset": 2048, 00:14:41.178 "data_size": 63488 00:14:41.178 }, 00:14:41.178 { 00:14:41.178 "name": "BaseBdev4", 00:14:41.178 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:41.178 "is_configured": true, 00:14:41.178 "data_offset": 2048, 00:14:41.178 "data_size": 63488 00:14:41.178 } 00:14:41.178 ] 00:14:41.178 }' 00:14:41.178 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.178 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.436 01:34:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.436 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.436 "name": "raid_bdev1", 00:14:41.436 "uuid": "8ecd1182-7066-443c-a388-58f2aeb039bf", 00:14:41.436 "strip_size_kb": 0, 00:14:41.436 "state": "online", 00:14:41.436 "raid_level": "raid1", 00:14:41.436 "superblock": true, 00:14:41.436 "num_base_bdevs": 4, 00:14:41.436 "num_base_bdevs_discovered": 2, 00:14:41.436 "num_base_bdevs_operational": 2, 00:14:41.436 "base_bdevs_list": [ 00:14:41.436 { 00:14:41.436 "name": null, 00:14:41.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.436 "is_configured": false, 00:14:41.436 "data_offset": 0, 00:14:41.436 "data_size": 63488 00:14:41.436 }, 00:14:41.436 { 00:14:41.436 "name": null, 00:14:41.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.436 "is_configured": false, 00:14:41.436 "data_offset": 2048, 00:14:41.436 "data_size": 63488 00:14:41.436 }, 00:14:41.436 { 00:14:41.436 "name": "BaseBdev3", 00:14:41.436 "uuid": "2efcb4be-8d55-5869-ba7f-727e2bcbbe3b", 00:14:41.436 "is_configured": true, 00:14:41.436 "data_offset": 2048, 00:14:41.436 "data_size": 63488 00:14:41.436 }, 
00:14:41.436 { 00:14:41.436 "name": "BaseBdev4", 00:14:41.436 "uuid": "531ec2cd-b155-5e2f-b1b4-b3d55b29f993", 00:14:41.436 "is_configured": true, 00:14:41.436 "data_offset": 2048, 00:14:41.436 "data_size": 63488 00:14:41.436 } 00:14:41.436 ] 00:14:41.436 }' 00:14:41.437 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.437 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.437 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77750 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77750 ']' 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77750 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77750 00:14:41.696 killing process with pid 77750 00:14:41.696 Received shutdown signal, test time was about 60.000000 seconds 00:14:41.696 00:14:41.696 Latency(us) 00:14:41.696 [2024-11-17T01:34:50.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.696 [2024-11-17T01:34:50.156Z] =================================================================================================================== 00:14:41.696 [2024-11-17T01:34:50.156Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77750' 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77750 00:14:41.696 [2024-11-17 01:34:49.974857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.696 [2024-11-17 01:34:49.974972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.696 [2024-11-17 01:34:49.975036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.696 [2024-11-17 01:34:49.975089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:41.696 01:34:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77750 00:14:42.264 [2024-11-17 01:34:50.435100] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:43.202 00:14:43.202 real 0m24.268s 00:14:43.202 user 0m29.427s 00:14:43.202 sys 0m3.671s 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.202 ************************************ 00:14:43.202 END TEST raid_rebuild_test_sb 00:14:43.202 ************************************ 00:14:43.202 01:34:51 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:43.202 01:34:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:43.202 01:34:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.202 01:34:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:43.202 ************************************ 00:14:43.202 START TEST raid_rebuild_test_io 00:14:43.202 ************************************ 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78501 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78501 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78501 ']' 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.202 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.202 01:34:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.202 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:43.202 Zero copy mechanism will not be used. 00:14:43.202 [2024-11-17 01:34:51.650001] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:43.202 [2024-11-17 01:34:51.650140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78501 ] 00:14:43.462 [2024-11-17 01:34:51.830360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.723 [2024-11-17 01:34:51.939643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.723 [2024-11-17 01:34:52.143120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.723 [2024-11-17 01:34:52.143176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.293 01:34:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 BaseBdev1_malloc 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 [2024-11-17 01:34:52.506339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.293 [2024-11-17 01:34:52.506486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.293 [2024-11-17 01:34:52.506545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:44.293 [2024-11-17 01:34:52.506588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.293 [2024-11-17 01:34:52.508749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.293 [2024-11-17 01:34:52.508841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.293 BaseBdev1 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 
BaseBdev2_malloc 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 [2024-11-17 01:34:52.560322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:44.293 [2024-11-17 01:34:52.560427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.293 [2024-11-17 01:34:52.560462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:44.293 [2024-11-17 01:34:52.560492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.293 [2024-11-17 01:34:52.562497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.293 [2024-11-17 01:34:52.562574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.293 BaseBdev2 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 BaseBdev3_malloc 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 [2024-11-17 01:34:52.650224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:44.293 [2024-11-17 01:34:52.650279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.293 [2024-11-17 01:34:52.650300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:44.293 [2024-11-17 01:34:52.650311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.293 [2024-11-17 01:34:52.652639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.293 [2024-11-17 01:34:52.652686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.293 BaseBdev3 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 BaseBdev4_malloc 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.293 [2024-11-17 01:34:52.703811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:44.293 [2024-11-17 01:34:52.703920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.293 [2024-11-17 01:34:52.703958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:44.293 [2024-11-17 01:34:52.703989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.293 [2024-11-17 01:34:52.706012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.293 [2024-11-17 01:34:52.706084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:44.293 BaseBdev4 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.293 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.553 spare_malloc 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.553 spare_delay 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.553 [2024-11-17 01:34:52.770516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.553 [2024-11-17 01:34:52.770638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.553 [2024-11-17 01:34:52.770675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:44.553 [2024-11-17 01:34:52.770723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.553 [2024-11-17 01:34:52.772747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.553 [2024-11-17 01:34:52.772847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.553 spare 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.553 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.553 [2024-11-17 01:34:52.782543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.553 [2024-11-17 01:34:52.784328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.553 [2024-11-17 01:34:52.784438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.553 [2024-11-17 01:34:52.784526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:44.553 [2024-11-17 01:34:52.784638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:44.553 [2024-11-17 01:34:52.784680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:44.553 [2024-11-17 01:34:52.784945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:44.553 [2024-11-17 01:34:52.785149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:44.553 [2024-11-17 01:34:52.785194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:44.554 [2024-11-17 01:34:52.785377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.554 "name": "raid_bdev1", 00:14:44.554 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:44.554 "strip_size_kb": 0, 00:14:44.554 "state": "online", 00:14:44.554 "raid_level": "raid1", 00:14:44.554 "superblock": false, 00:14:44.554 "num_base_bdevs": 4, 00:14:44.554 "num_base_bdevs_discovered": 4, 00:14:44.554 "num_base_bdevs_operational": 4, 00:14:44.554 "base_bdevs_list": [ 00:14:44.554 { 00:14:44.554 "name": "BaseBdev1", 00:14:44.554 "uuid": "1b968b92-52a1-588c-9363-45cd39362bfc", 00:14:44.554 "is_configured": true, 00:14:44.554 "data_offset": 0, 00:14:44.554 "data_size": 65536 00:14:44.554 }, 00:14:44.554 { 00:14:44.554 "name": "BaseBdev2", 00:14:44.554 "uuid": "601c63af-d0a4-5f4c-9cce-b8e5196e854f", 00:14:44.554 "is_configured": true, 00:14:44.554 "data_offset": 0, 00:14:44.554 "data_size": 65536 00:14:44.554 }, 00:14:44.554 { 00:14:44.554 "name": "BaseBdev3", 00:14:44.554 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:44.554 "is_configured": true, 00:14:44.554 "data_offset": 0, 00:14:44.554 "data_size": 65536 00:14:44.554 }, 00:14:44.554 { 00:14:44.554 "name": "BaseBdev4", 00:14:44.554 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:44.554 "is_configured": true, 00:14:44.554 "data_offset": 0, 00:14:44.554 "data_size": 65536 00:14:44.554 } 00:14:44.554 ] 00:14:44.554 }' 00:14:44.554 
01:34:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.554 01:34:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.812 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:44.812 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:44.813 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.813 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.813 [2024-11-17 01:34:53.250042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:45.071 01:34:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.071 [2024-11-17 01:34:53.325551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.071 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.072 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.072 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.072 01:34:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.072 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.072 "name": "raid_bdev1", 00:14:45.072 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:45.072 "strip_size_kb": 0, 00:14:45.072 "state": "online", 00:14:45.072 "raid_level": "raid1", 00:14:45.072 "superblock": false, 00:14:45.072 "num_base_bdevs": 4, 00:14:45.072 "num_base_bdevs_discovered": 3, 00:14:45.072 "num_base_bdevs_operational": 3, 00:14:45.072 "base_bdevs_list": [ 00:14:45.072 { 00:14:45.072 "name": null, 00:14:45.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.072 "is_configured": false, 00:14:45.072 "data_offset": 0, 00:14:45.072 "data_size": 65536 00:14:45.072 }, 00:14:45.072 { 00:14:45.072 "name": "BaseBdev2", 00:14:45.072 "uuid": "601c63af-d0a4-5f4c-9cce-b8e5196e854f", 00:14:45.072 "is_configured": true, 00:14:45.072 "data_offset": 0, 00:14:45.072 "data_size": 65536 00:14:45.072 }, 00:14:45.072 { 00:14:45.072 "name": "BaseBdev3", 00:14:45.072 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:45.072 "is_configured": true, 00:14:45.072 "data_offset": 0, 00:14:45.072 "data_size": 65536 00:14:45.072 }, 00:14:45.072 { 00:14:45.072 "name": "BaseBdev4", 00:14:45.072 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:45.072 "is_configured": true, 00:14:45.072 "data_offset": 0, 00:14:45.072 "data_size": 65536 00:14:45.072 } 00:14:45.072 ] 00:14:45.072 }' 00:14:45.072 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.072 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.072 [2024-11-17 01:34:53.421865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:45.072 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:45.072 Zero copy mechanism will not be used. 00:14:45.072 Running I/O for 60 seconds... 
00:14:45.332 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.332 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.332 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.332 [2024-11-17 01:34:53.746449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.332 01:34:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.332 01:34:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:45.592 [2024-11-17 01:34:53.805047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:45.592 [2024-11-17 01:34:53.807012] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.592 [2024-11-17 01:34:53.915126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.592 [2024-11-17 01:34:53.915687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.852 [2024-11-17 01:34:54.133561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:45.852 [2024-11-17 01:34:54.134375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.112 176.00 IOPS, 528.00 MiB/s [2024-11-17T01:34:54.572Z] [2024-11-17 01:34:54.468586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:46.112 [2024-11-17 01:34:54.469131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:46.372 [2024-11-17 01:34:54.593388] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.372 01:34:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.632 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.632 "name": "raid_bdev1", 00:14:46.632 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:46.632 "strip_size_kb": 0, 00:14:46.632 "state": "online", 00:14:46.632 "raid_level": "raid1", 00:14:46.632 "superblock": false, 00:14:46.632 "num_base_bdevs": 4, 00:14:46.632 "num_base_bdevs_discovered": 4, 00:14:46.632 "num_base_bdevs_operational": 4, 00:14:46.632 "process": { 00:14:46.632 "type": "rebuild", 00:14:46.632 "target": "spare", 00:14:46.632 "progress": { 00:14:46.632 "blocks": 10240, 00:14:46.632 "percent": 15 00:14:46.632 } 00:14:46.632 }, 00:14:46.632 "base_bdevs_list": [ 00:14:46.632 { 00:14:46.632 "name": "spare", 00:14:46.632 "uuid": 
"e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:46.632 "is_configured": true, 00:14:46.632 "data_offset": 0, 00:14:46.632 "data_size": 65536 00:14:46.632 }, 00:14:46.632 { 00:14:46.632 "name": "BaseBdev2", 00:14:46.632 "uuid": "601c63af-d0a4-5f4c-9cce-b8e5196e854f", 00:14:46.632 "is_configured": true, 00:14:46.632 "data_offset": 0, 00:14:46.632 "data_size": 65536 00:14:46.632 }, 00:14:46.632 { 00:14:46.632 "name": "BaseBdev3", 00:14:46.632 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:46.632 "is_configured": true, 00:14:46.632 "data_offset": 0, 00:14:46.632 "data_size": 65536 00:14:46.632 }, 00:14:46.632 { 00:14:46.632 "name": "BaseBdev4", 00:14:46.632 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:46.632 "is_configured": true, 00:14:46.632 "data_offset": 0, 00:14:46.632 "data_size": 65536 00:14:46.632 } 00:14:46.632 ] 00:14:46.632 }' 00:14:46.632 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.632 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.632 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.632 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.632 01:34:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:46.632 01:34:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.632 01:34:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.632 [2024-11-17 01:34:54.939412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.632 [2024-11-17 01:34:55.024967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:46.632 [2024-11-17 01:34:55.080456] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.632 [2024-11-17 01:34:55.085315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.632 [2024-11-17 01:34:55.085408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.632 [2024-11-17 01:34:55.085437] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.892 [2024-11-17 01:34:55.114972] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.892 "name": "raid_bdev1", 00:14:46.892 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:46.892 "strip_size_kb": 0, 00:14:46.892 "state": "online", 00:14:46.892 "raid_level": "raid1", 00:14:46.892 "superblock": false, 00:14:46.892 "num_base_bdevs": 4, 00:14:46.892 "num_base_bdevs_discovered": 3, 00:14:46.892 "num_base_bdevs_operational": 3, 00:14:46.892 "base_bdevs_list": [ 00:14:46.892 { 00:14:46.892 "name": null, 00:14:46.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.892 "is_configured": false, 00:14:46.892 "data_offset": 0, 00:14:46.892 "data_size": 65536 00:14:46.892 }, 00:14:46.892 { 00:14:46.892 "name": "BaseBdev2", 00:14:46.892 "uuid": "601c63af-d0a4-5f4c-9cce-b8e5196e854f", 00:14:46.892 "is_configured": true, 00:14:46.892 "data_offset": 0, 00:14:46.892 "data_size": 65536 00:14:46.892 }, 00:14:46.892 { 00:14:46.892 "name": "BaseBdev3", 00:14:46.892 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:46.892 "is_configured": true, 00:14:46.892 "data_offset": 0, 00:14:46.892 "data_size": 65536 00:14:46.892 }, 00:14:46.892 { 00:14:46.892 "name": "BaseBdev4", 00:14:46.892 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:46.892 "is_configured": true, 00:14:46.892 "data_offset": 0, 00:14:46.892 "data_size": 65536 00:14:46.892 } 00:14:46.892 ] 00:14:46.892 }' 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.892 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.152 158.50 IOPS, 475.50 MiB/s 
[2024-11-17T01:34:55.612Z] 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.152 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.152 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.152 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.152 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.152 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.152 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.152 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.152 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.411 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.411 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.411 "name": "raid_bdev1", 00:14:47.411 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:47.411 "strip_size_kb": 0, 00:14:47.411 "state": "online", 00:14:47.411 "raid_level": "raid1", 00:14:47.411 "superblock": false, 00:14:47.411 "num_base_bdevs": 4, 00:14:47.411 "num_base_bdevs_discovered": 3, 00:14:47.411 "num_base_bdevs_operational": 3, 00:14:47.411 "base_bdevs_list": [ 00:14:47.411 { 00:14:47.411 "name": null, 00:14:47.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.412 "is_configured": false, 00:14:47.412 "data_offset": 0, 00:14:47.412 "data_size": 65536 00:14:47.412 }, 00:14:47.412 { 00:14:47.412 "name": "BaseBdev2", 00:14:47.412 "uuid": "601c63af-d0a4-5f4c-9cce-b8e5196e854f", 00:14:47.412 "is_configured": true, 00:14:47.412 
"data_offset": 0, 00:14:47.412 "data_size": 65536 00:14:47.412 }, 00:14:47.412 { 00:14:47.412 "name": "BaseBdev3", 00:14:47.412 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:47.412 "is_configured": true, 00:14:47.412 "data_offset": 0, 00:14:47.412 "data_size": 65536 00:14:47.412 }, 00:14:47.412 { 00:14:47.412 "name": "BaseBdev4", 00:14:47.412 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:47.412 "is_configured": true, 00:14:47.412 "data_offset": 0, 00:14:47.412 "data_size": 65536 00:14:47.412 } 00:14:47.412 ] 00:14:47.412 }' 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.412 [2024-11-17 01:34:55.731865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.412 01:34:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:47.412 [2024-11-17 01:34:55.796475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:47.412 [2024-11-17 01:34:55.798483] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.672 [2024-11-17 01:34:55.911902] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.672 [2024-11-17 01:34:55.913162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.672 [2024-11-17 01:34:56.122827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:47.672 [2024-11-17 01:34:56.123234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:48.242 161.33 IOPS, 484.00 MiB/s [2024-11-17T01:34:56.702Z] [2024-11-17 01:34:56.458484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:48.242 [2024-11-17 01:34:56.458889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:48.242 [2024-11-17 01:34:56.670663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:48.242 [2024-11-17 01:34:56.671014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.502 "name": "raid_bdev1", 00:14:48.502 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:48.502 "strip_size_kb": 0, 00:14:48.502 "state": "online", 00:14:48.502 "raid_level": "raid1", 00:14:48.502 "superblock": false, 00:14:48.502 "num_base_bdevs": 4, 00:14:48.502 "num_base_bdevs_discovered": 4, 00:14:48.502 "num_base_bdevs_operational": 4, 00:14:48.502 "process": { 00:14:48.502 "type": "rebuild", 00:14:48.502 "target": "spare", 00:14:48.502 "progress": { 00:14:48.502 "blocks": 10240, 00:14:48.502 "percent": 15 00:14:48.502 } 00:14:48.502 }, 00:14:48.502 "base_bdevs_list": [ 00:14:48.502 { 00:14:48.502 "name": "spare", 00:14:48.502 "uuid": "e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:48.502 "is_configured": true, 00:14:48.502 "data_offset": 0, 00:14:48.502 "data_size": 65536 00:14:48.502 }, 00:14:48.502 { 00:14:48.502 "name": "BaseBdev2", 00:14:48.502 "uuid": "601c63af-d0a4-5f4c-9cce-b8e5196e854f", 00:14:48.502 "is_configured": true, 00:14:48.502 "data_offset": 0, 00:14:48.502 "data_size": 65536 00:14:48.502 }, 00:14:48.502 { 00:14:48.502 "name": "BaseBdev3", 00:14:48.502 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:48.502 "is_configured": true, 00:14:48.502 "data_offset": 0, 00:14:48.502 "data_size": 65536 00:14:48.502 }, 00:14:48.502 { 00:14:48.502 "name": "BaseBdev4", 00:14:48.502 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:48.502 "is_configured": true, 00:14:48.502 "data_offset": 0, 00:14:48.502 "data_size": 65536 00:14:48.502 } 00:14:48.502 ] 00:14:48.502 }' 
00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.502 01:34:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.502 [2024-11-17 01:34:56.916925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:48.502 [2024-11-17 01:34:56.926471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.761 [2024-11-17 01:34:57.141031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:48.762 [2024-11-17 01:34:57.141412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:49.020 [2024-11-17 01:34:57.244064] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:49.020 [2024-11-17 01:34:57.244140] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:49.020 [2024-11-17 01:34:57.245536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.020 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.020 "name": "raid_bdev1", 00:14:49.020 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:49.020 "strip_size_kb": 0, 00:14:49.020 "state": "online", 00:14:49.020 "raid_level": "raid1", 
00:14:49.020 "superblock": false, 00:14:49.020 "num_base_bdevs": 4, 00:14:49.020 "num_base_bdevs_discovered": 3, 00:14:49.020 "num_base_bdevs_operational": 3, 00:14:49.020 "process": { 00:14:49.020 "type": "rebuild", 00:14:49.020 "target": "spare", 00:14:49.020 "progress": { 00:14:49.020 "blocks": 16384, 00:14:49.020 "percent": 25 00:14:49.020 } 00:14:49.020 }, 00:14:49.020 "base_bdevs_list": [ 00:14:49.020 { 00:14:49.020 "name": "spare", 00:14:49.020 "uuid": "e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:49.020 "is_configured": true, 00:14:49.020 "data_offset": 0, 00:14:49.020 "data_size": 65536 00:14:49.020 }, 00:14:49.021 { 00:14:49.021 "name": null, 00:14:49.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.021 "is_configured": false, 00:14:49.021 "data_offset": 0, 00:14:49.021 "data_size": 65536 00:14:49.021 }, 00:14:49.021 { 00:14:49.021 "name": "BaseBdev3", 00:14:49.021 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:49.021 "is_configured": true, 00:14:49.021 "data_offset": 0, 00:14:49.021 "data_size": 65536 00:14:49.021 }, 00:14:49.021 { 00:14:49.021 "name": "BaseBdev4", 00:14:49.021 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:49.021 "is_configured": true, 00:14:49.021 "data_offset": 0, 00:14:49.021 "data_size": 65536 00:14:49.021 } 00:14:49.021 ] 00:14:49.021 }' 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=471 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.021 
01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.021 133.00 IOPS, 399.00 MiB/s [2024-11-17T01:34:57.481Z] 01:34:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.021 "name": "raid_bdev1", 00:14:49.021 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:49.021 "strip_size_kb": 0, 00:14:49.021 "state": "online", 00:14:49.021 "raid_level": "raid1", 00:14:49.021 "superblock": false, 00:14:49.021 "num_base_bdevs": 4, 00:14:49.021 "num_base_bdevs_discovered": 3, 00:14:49.021 "num_base_bdevs_operational": 3, 00:14:49.021 "process": { 00:14:49.021 "type": "rebuild", 00:14:49.021 "target": "spare", 00:14:49.021 "progress": { 00:14:49.021 "blocks": 18432, 00:14:49.021 "percent": 28 00:14:49.021 } 00:14:49.021 }, 00:14:49.021 "base_bdevs_list": [ 00:14:49.021 { 00:14:49.021 "name": "spare", 00:14:49.021 "uuid": "e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:49.021 "is_configured": true, 
00:14:49.021 "data_offset": 0, 00:14:49.021 "data_size": 65536 00:14:49.021 }, 00:14:49.021 { 00:14:49.021 "name": null, 00:14:49.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.021 "is_configured": false, 00:14:49.021 "data_offset": 0, 00:14:49.021 "data_size": 65536 00:14:49.021 }, 00:14:49.021 { 00:14:49.021 "name": "BaseBdev3", 00:14:49.021 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:49.021 "is_configured": true, 00:14:49.021 "data_offset": 0, 00:14:49.021 "data_size": 65536 00:14:49.021 }, 00:14:49.021 { 00:14:49.021 "name": "BaseBdev4", 00:14:49.021 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:49.021 "is_configured": true, 00:14:49.021 "data_offset": 0, 00:14:49.021 "data_size": 65536 00:14:49.021 } 00:14:49.021 ] 00:14:49.021 }' 00:14:49.021 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.281 [2024-11-17 01:34:57.480747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:49.281 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.281 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.281 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.281 01:34:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.281 [2024-11-17 01:34:57.695337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:49.281 [2024-11-17 01:34:57.695962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:49.849 [2024-11-17 01:34:58.046064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:50.109 
119.60 IOPS, 358.80 MiB/s [2024-11-17T01:34:58.569Z] [2024-11-17 01:34:58.494517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.109 01:34:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.369 01:34:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.369 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.369 "name": "raid_bdev1", 00:14:50.369 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:50.369 "strip_size_kb": 0, 00:14:50.369 "state": "online", 00:14:50.369 "raid_level": "raid1", 00:14:50.369 "superblock": false, 00:14:50.369 "num_base_bdevs": 4, 00:14:50.369 "num_base_bdevs_discovered": 3, 00:14:50.369 "num_base_bdevs_operational": 3, 00:14:50.369 "process": { 00:14:50.369 "type": "rebuild", 00:14:50.369 "target": "spare", 00:14:50.369 "progress": { 
00:14:50.369 "blocks": 34816, 00:14:50.369 "percent": 53 00:14:50.369 } 00:14:50.369 }, 00:14:50.369 "base_bdevs_list": [ 00:14:50.369 { 00:14:50.369 "name": "spare", 00:14:50.369 "uuid": "e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:50.369 "is_configured": true, 00:14:50.369 "data_offset": 0, 00:14:50.369 "data_size": 65536 00:14:50.369 }, 00:14:50.369 { 00:14:50.369 "name": null, 00:14:50.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.369 "is_configured": false, 00:14:50.369 "data_offset": 0, 00:14:50.369 "data_size": 65536 00:14:50.369 }, 00:14:50.369 { 00:14:50.369 "name": "BaseBdev3", 00:14:50.369 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:50.369 "is_configured": true, 00:14:50.369 "data_offset": 0, 00:14:50.369 "data_size": 65536 00:14:50.369 }, 00:14:50.369 { 00:14:50.369 "name": "BaseBdev4", 00:14:50.369 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:50.369 "is_configured": true, 00:14:50.369 "data_offset": 0, 00:14:50.369 "data_size": 65536 00:14:50.369 } 00:14:50.369 ] 00:14:50.369 }' 00:14:50.369 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.369 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.369 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.369 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.369 01:34:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.370 [2024-11-17 01:34:58.735735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:50.629 [2024-11-17 01:34:58.937831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:50.629 [2024-11-17 01:34:58.938118] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:51.458 106.67 IOPS, 320.00 MiB/s [2024-11-17T01:34:59.918Z] [2024-11-17 01:34:59.703494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:51.458 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.458 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.458 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.458 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.458 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.458 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.458 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.459 "name": "raid_bdev1", 00:14:51.459 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:51.459 "strip_size_kb": 0, 00:14:51.459 "state": "online", 00:14:51.459 "raid_level": "raid1", 00:14:51.459 "superblock": false, 00:14:51.459 "num_base_bdevs": 4, 00:14:51.459 "num_base_bdevs_discovered": 3, 00:14:51.459 "num_base_bdevs_operational": 
3, 00:14:51.459 "process": { 00:14:51.459 "type": "rebuild", 00:14:51.459 "target": "spare", 00:14:51.459 "progress": { 00:14:51.459 "blocks": 53248, 00:14:51.459 "percent": 81 00:14:51.459 } 00:14:51.459 }, 00:14:51.459 "base_bdevs_list": [ 00:14:51.459 { 00:14:51.459 "name": "spare", 00:14:51.459 "uuid": "e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:51.459 "is_configured": true, 00:14:51.459 "data_offset": 0, 00:14:51.459 "data_size": 65536 00:14:51.459 }, 00:14:51.459 { 00:14:51.459 "name": null, 00:14:51.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.459 "is_configured": false, 00:14:51.459 "data_offset": 0, 00:14:51.459 "data_size": 65536 00:14:51.459 }, 00:14:51.459 { 00:14:51.459 "name": "BaseBdev3", 00:14:51.459 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:51.459 "is_configured": true, 00:14:51.459 "data_offset": 0, 00:14:51.459 "data_size": 65536 00:14:51.459 }, 00:14:51.459 { 00:14:51.459 "name": "BaseBdev4", 00:14:51.459 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:51.459 "is_configured": true, 00:14:51.459 "data_offset": 0, 00:14:51.459 "data_size": 65536 00:14:51.459 } 00:14:51.459 ] 00:14:51.459 }' 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.459 01:34:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.719 [2024-11-17 01:35:00.140662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:52.239 97.14 IOPS, 291.43 MiB/s [2024-11-17T01:35:00.699Z] [2024-11-17 01:35:00.588452] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:52.239 [2024-11-17 01:35:00.693388] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:52.239 [2024-11-17 01:35:00.695257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.499 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.499 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.499 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.500 "name": "raid_bdev1", 00:14:52.500 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:52.500 "strip_size_kb": 0, 00:14:52.500 "state": "online", 00:14:52.500 "raid_level": "raid1", 00:14:52.500 "superblock": false, 00:14:52.500 "num_base_bdevs": 4, 00:14:52.500 "num_base_bdevs_discovered": 3, 00:14:52.500 
"num_base_bdevs_operational": 3, 00:14:52.500 "base_bdevs_list": [ 00:14:52.500 { 00:14:52.500 "name": "spare", 00:14:52.500 "uuid": "e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:52.500 "is_configured": true, 00:14:52.500 "data_offset": 0, 00:14:52.500 "data_size": 65536 00:14:52.500 }, 00:14:52.500 { 00:14:52.500 "name": null, 00:14:52.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.500 "is_configured": false, 00:14:52.500 "data_offset": 0, 00:14:52.500 "data_size": 65536 00:14:52.500 }, 00:14:52.500 { 00:14:52.500 "name": "BaseBdev3", 00:14:52.500 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:52.500 "is_configured": true, 00:14:52.500 "data_offset": 0, 00:14:52.500 "data_size": 65536 00:14:52.500 }, 00:14:52.500 { 00:14:52.500 "name": "BaseBdev4", 00:14:52.500 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:52.500 "is_configured": true, 00:14:52.500 "data_offset": 0, 00:14:52.500 "data_size": 65536 00:14:52.500 } 00:14:52.500 ] 00:14:52.500 }' 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.500 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:52.760 01:35:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.760 
01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.760 "name": "raid_bdev1", 00:14:52.760 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:52.760 "strip_size_kb": 0, 00:14:52.760 "state": "online", 00:14:52.760 "raid_level": "raid1", 00:14:52.760 "superblock": false, 00:14:52.760 "num_base_bdevs": 4, 00:14:52.760 "num_base_bdevs_discovered": 3, 00:14:52.760 "num_base_bdevs_operational": 3, 00:14:52.760 "base_bdevs_list": [ 00:14:52.760 { 00:14:52.760 "name": "spare", 00:14:52.760 "uuid": "e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:52.760 "is_configured": true, 00:14:52.760 "data_offset": 0, 00:14:52.760 "data_size": 65536 00:14:52.760 }, 00:14:52.760 { 00:14:52.760 "name": null, 00:14:52.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.760 "is_configured": false, 00:14:52.760 "data_offset": 0, 00:14:52.760 "data_size": 65536 00:14:52.760 }, 00:14:52.760 { 00:14:52.760 "name": "BaseBdev3", 00:14:52.760 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:52.760 "is_configured": true, 00:14:52.760 "data_offset": 0, 00:14:52.760 "data_size": 65536 00:14:52.760 }, 00:14:52.760 { 00:14:52.760 "name": "BaseBdev4", 00:14:52.760 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:52.760 "is_configured": true, 00:14:52.760 "data_offset": 0, 00:14:52.760 "data_size": 
65536 00:14:52.760 } 00:14:52.760 ] 00:14:52.760 }' 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.760 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.760 "name": "raid_bdev1", 00:14:52.760 "uuid": "ac803825-4265-4552-8dc8-4acc69247b83", 00:14:52.760 "strip_size_kb": 0, 00:14:52.760 "state": "online", 00:14:52.760 "raid_level": "raid1", 00:14:52.760 "superblock": false, 00:14:52.760 "num_base_bdevs": 4, 00:14:52.760 "num_base_bdevs_discovered": 3, 00:14:52.760 "num_base_bdevs_operational": 3, 00:14:52.760 "base_bdevs_list": [ 00:14:52.760 { 00:14:52.760 "name": "spare", 00:14:52.760 "uuid": "e8d30d48-650f-50da-9f86-ba3e6b8b2cbc", 00:14:52.760 "is_configured": true, 00:14:52.761 "data_offset": 0, 00:14:52.761 "data_size": 65536 00:14:52.761 }, 00:14:52.761 { 00:14:52.761 "name": null, 00:14:52.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.761 "is_configured": false, 00:14:52.761 "data_offset": 0, 00:14:52.761 "data_size": 65536 00:14:52.761 }, 00:14:52.761 { 00:14:52.761 "name": "BaseBdev3", 00:14:52.761 "uuid": "542a4fe7-50a5-53f8-9d01-ac3f78ffc26a", 00:14:52.761 "is_configured": true, 00:14:52.761 "data_offset": 0, 00:14:52.761 "data_size": 65536 00:14:52.761 }, 00:14:52.761 { 00:14:52.761 "name": "BaseBdev4", 00:14:52.761 "uuid": "141659e9-08e4-59a1-bbc7-e25136aa84fd", 00:14:52.761 "is_configured": true, 00:14:52.761 "data_offset": 0, 00:14:52.761 "data_size": 65536 00:14:52.761 } 00:14:52.761 ] 00:14:52.761 }' 00:14:52.761 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.761 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.280 89.75 IOPS, 269.25 MiB/s [2024-11-17T01:35:01.740Z] 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:53.280 01:35:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.280 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.280 [2024-11-17 01:35:01.563527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:53.280 [2024-11-17 01:35:01.563627] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.280 00:14:53.280 Latency(us) 00:14:53.280 [2024-11-17T01:35:01.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.280 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:53.280 raid_bdev1 : 8.26 87.95 263.84 0.00 0.00 14810.80 334.48 114473.36 00:14:53.280 [2024-11-17T01:35:01.740Z] =================================================================================================================== 00:14:53.280 [2024-11-17T01:35:01.740Z] Total : 87.95 263.84 0.00 0.00 14810.80 334.48 114473.36 00:14:53.280 [2024-11-17 01:35:01.683485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.280 { 00:14:53.280 "results": [ 00:14:53.280 { 00:14:53.280 "job": "raid_bdev1", 00:14:53.280 "core_mask": "0x1", 00:14:53.280 "workload": "randrw", 00:14:53.280 "percentage": 50, 00:14:53.280 "status": "finished", 00:14:53.280 "queue_depth": 2, 00:14:53.280 "io_size": 3145728, 00:14:53.280 "runtime": 8.255136, 00:14:53.280 "iops": 87.94525008431116, 00:14:53.280 "mibps": 263.83575025293345, 00:14:53.280 "io_failed": 0, 00:14:53.280 "io_timeout": 0, 00:14:53.280 "avg_latency_us": 14810.803055565579, 00:14:53.280 "min_latency_us": 334.4768558951965, 00:14:53.280 "max_latency_us": 114473.36244541485 00:14:53.280 } 00:14:53.280 ], 00:14:53.280 "core_count": 1 00:14:53.280 } 00:14:53.280 [2024-11-17 01:35:01.683590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.280 [2024-11-17 01:35:01.683689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.280 [2024-11-17 01:35:01.683721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:53.280 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.280 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.280 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.280 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:53.280 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.280 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.540 
01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:53.540 /dev/nbd0 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.540 1+0 records in 00:14:53.540 1+0 records out 00:14:53.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547462 s, 7.5 MB/s 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.540 01:35:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:53.800 /dev/nbd1 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.800 1+0 records in 00:14:53.800 1+0 records out 00:14:53.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504754 s, 8.1 MB/s 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.800 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:54.060 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:54.060 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.060 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:54.060 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.060 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:54.060 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.060 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:54.320 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:54.320 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:54.320 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:54.320 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.321 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:54.581 /dev/nbd1 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.581 1+0 records in 00:14:54.581 1+0 records out 00:14:54.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538539 s, 7.6 MB/s 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.581 01:35:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:54.842 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.842 01:35:03 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78501 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78501 ']' 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78501 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78501 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 78501' 00:14:55.102 killing process with pid 78501 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78501 00:14:55.102 Received shutdown signal, test time was about 10.043835 seconds 00:14:55.102 00:14:55.102 Latency(us) 00:14:55.102 [2024-11-17T01:35:03.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.102 [2024-11-17T01:35:03.562Z] =================================================================================================================== 00:14:55.102 [2024-11-17T01:35:03.562Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.102 [2024-11-17 01:35:03.448598] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.102 01:35:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78501 00:14:55.673 [2024-11-17 01:35:03.849984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.613 01:35:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:56.613 00:14:56.613 real 0m13.434s 00:14:56.613 user 0m16.827s 00:14:56.613 sys 0m1.896s 00:14:56.613 01:35:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.613 01:35:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.613 ************************************ 00:14:56.613 END TEST raid_rebuild_test_io 00:14:56.613 ************************************ 00:14:56.613 01:35:05 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:56.613 01:35:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:56.613 01:35:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.613 01:35:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:56.613 ************************************ 00:14:56.613 START TEST 
raid_rebuild_test_sb_io 00:14:56.613 ************************************ 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 
00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:56.613 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78911 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78911 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78911 ']' 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.874 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.874 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:56.874 Zero copy mechanism will not be used. 00:14:56.874 [2024-11-17 01:35:05.164898] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:56.874 [2024-11-17 01:35:05.165070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78911 ] 00:14:57.135 [2024-11-17 01:35:05.342831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.135 [2024-11-17 01:35:05.451297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.396 [2024-11-17 01:35:05.644333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.396 [2024-11-17 01:35:05.644476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.656 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.656 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:57.656 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:14:57.656 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:57.656 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.656 01:35:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.656 BaseBdev1_malloc 00:14:57.656 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.656 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:57.656 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.656 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.656 [2024-11-17 01:35:06.025459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:57.656 [2024-11-17 01:35:06.025584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.656 [2024-11-17 01:35:06.025625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:57.656 [2024-11-17 01:35:06.025655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.656 [2024-11-17 01:35:06.027704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.656 BaseBdev1 00:14:57.656 [2024-11-17 01:35:06.027790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:57.656 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.656 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.656 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.657 BaseBdev2_malloc 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.657 [2024-11-17 01:35:06.077497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:57.657 [2024-11-17 01:35:06.077594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.657 [2024-11-17 01:35:06.077629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:57.657 [2024-11-17 01:35:06.077659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.657 [2024-11-17 01:35:06.079671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.657 [2024-11-17 01:35:06.079750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:57.657 BaseBdev2 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.657 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.918 BaseBdev3_malloc 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 [2024-11-17 01:35:06.153615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:57.918 [2024-11-17 01:35:06.153718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.918 [2024-11-17 01:35:06.153753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:57.918 [2024-11-17 01:35:06.153790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.918 [2024-11-17 01:35:06.155782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.918 [2024-11-17 01:35:06.155868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:57.918 BaseBdev3 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 BaseBdev4_malloc 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 [2024-11-17 01:35:06.207266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:57.918 [2024-11-17 01:35:06.207372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.918 [2024-11-17 01:35:06.207406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:57.918 [2024-11-17 01:35:06.207435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.918 [2024-11-17 01:35:06.209399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.918 [2024-11-17 01:35:06.209489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:57.918 BaseBdev4 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 spare_malloc 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 spare_delay 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 [2024-11-17 01:35:06.273462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:57.918 [2024-11-17 01:35:06.273578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.918 [2024-11-17 01:35:06.273613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:57.918 [2024-11-17 01:35:06.273643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.918 [2024-11-17 01:35:06.275623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.918 [2024-11-17 01:35:06.275715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:57.918 spare 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.918 [2024-11-17 01:35:06.285486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.918 [2024-11-17 
01:35:06.287264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.918 [2024-11-17 01:35:06.287385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.918 [2024-11-17 01:35:06.287458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:57.918 [2024-11-17 01:35:06.287673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:57.918 [2024-11-17 01:35:06.287723] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:57.918 [2024-11-17 01:35:06.287978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:57.918 [2024-11-17 01:35:06.288194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:57.918 [2024-11-17 01:35:06.288237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:57.918 [2024-11-17 01:35:06.288414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.918 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.919 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.919 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.919 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.919 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.919 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.919 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.919 "name": "raid_bdev1", 00:14:57.919 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:14:57.919 "strip_size_kb": 0, 00:14:57.919 "state": "online", 00:14:57.919 "raid_level": "raid1", 00:14:57.919 "superblock": true, 00:14:57.919 "num_base_bdevs": 4, 00:14:57.919 "num_base_bdevs_discovered": 4, 00:14:57.919 "num_base_bdevs_operational": 4, 00:14:57.919 "base_bdevs_list": [ 00:14:57.919 { 00:14:57.919 "name": "BaseBdev1", 00:14:57.919 "uuid": "54a20419-febe-5225-a7af-4bcbe11807cb", 00:14:57.919 "is_configured": true, 00:14:57.919 "data_offset": 2048, 00:14:57.919 "data_size": 63488 00:14:57.919 }, 00:14:57.919 { 00:14:57.919 "name": "BaseBdev2", 00:14:57.919 "uuid": "324504b6-b7e6-5195-9378-db73eb45f0f2", 00:14:57.919 "is_configured": true, 00:14:57.919 "data_offset": 2048, 00:14:57.919 "data_size": 63488 00:14:57.919 }, 00:14:57.919 { 00:14:57.919 "name": "BaseBdev3", 00:14:57.919 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 
00:14:57.919 "is_configured": true, 00:14:57.919 "data_offset": 2048, 00:14:57.919 "data_size": 63488 00:14:57.919 }, 00:14:57.919 { 00:14:57.919 "name": "BaseBdev4", 00:14:57.919 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:14:57.919 "is_configured": true, 00:14:57.919 "data_offset": 2048, 00:14:57.919 "data_size": 63488 00:14:57.919 } 00:14:57.919 ] 00:14:57.919 }' 00:14:57.919 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.919 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:58.519 [2024-11-17 01:35:06.733052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- 
# data_offset=2048 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.519 [2024-11-17 01:35:06.820545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.519 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.520 01:35:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.520 "name": "raid_bdev1", 00:14:58.520 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:14:58.520 "strip_size_kb": 0, 00:14:58.520 "state": "online", 00:14:58.520 "raid_level": "raid1", 00:14:58.520 "superblock": true, 00:14:58.520 "num_base_bdevs": 4, 00:14:58.520 "num_base_bdevs_discovered": 3, 00:14:58.520 "num_base_bdevs_operational": 3, 00:14:58.520 "base_bdevs_list": [ 00:14:58.520 { 00:14:58.520 "name": null, 00:14:58.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.520 "is_configured": false, 00:14:58.520 "data_offset": 0, 00:14:58.520 "data_size": 63488 00:14:58.520 }, 00:14:58.520 { 00:14:58.520 "name": "BaseBdev2", 00:14:58.520 "uuid": "324504b6-b7e6-5195-9378-db73eb45f0f2", 00:14:58.520 "is_configured": true, 00:14:58.520 "data_offset": 2048, 00:14:58.520 "data_size": 63488 00:14:58.520 }, 00:14:58.520 { 00:14:58.520 "name": "BaseBdev3", 00:14:58.520 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:14:58.520 "is_configured": true, 00:14:58.520 "data_offset": 2048, 00:14:58.520 "data_size": 63488 00:14:58.520 }, 00:14:58.520 { 00:14:58.520 "name": "BaseBdev4", 00:14:58.520 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:14:58.520 "is_configured": true, 00:14:58.520 "data_offset": 2048, 00:14:58.520 "data_size": 63488 00:14:58.520 } 00:14:58.520 ] 00:14:58.520 }' 00:14:58.520 01:35:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.520 01:35:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.520 [2024-11-17 01:35:06.911754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:58.520 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:58.520 Zero copy mechanism will not be used. 00:14:58.520 Running I/O for 60 seconds... 00:14:59.088 01:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.088 01:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.088 01:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.088 [2024-11-17 01:35:07.270646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.088 01:35:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.088 01:35:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:59.088 [2024-11-17 01:35:07.316883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:59.088 [2024-11-17 01:35:07.318836] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:59.088 [2024-11-17 01:35:07.441405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.088 [2024-11-17 01:35:07.442076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.348 [2024-11-17 01:35:07.653469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.348 [2024-11-17 01:35:07.653916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:59.608 [2024-11-17 01:35:07.880733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:59.608 [2024-11-17 01:35:07.882114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:59.868 198.00 IOPS, 594.00 MiB/s [2024-11-17T01:35:08.328Z] [2024-11-17 01:35:08.121809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.868 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.128 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.128 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.128 "name": "raid_bdev1", 00:15:00.128 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:00.128 "strip_size_kb": 0, 00:15:00.128 "state": 
"online", 00:15:00.128 "raid_level": "raid1", 00:15:00.128 "superblock": true, 00:15:00.128 "num_base_bdevs": 4, 00:15:00.128 "num_base_bdevs_discovered": 4, 00:15:00.128 "num_base_bdevs_operational": 4, 00:15:00.128 "process": { 00:15:00.128 "type": "rebuild", 00:15:00.128 "target": "spare", 00:15:00.128 "progress": { 00:15:00.128 "blocks": 10240, 00:15:00.128 "percent": 16 00:15:00.128 } 00:15:00.128 }, 00:15:00.128 "base_bdevs_list": [ 00:15:00.128 { 00:15:00.128 "name": "spare", 00:15:00.128 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:00.128 "is_configured": true, 00:15:00.128 "data_offset": 2048, 00:15:00.128 "data_size": 63488 00:15:00.128 }, 00:15:00.128 { 00:15:00.128 "name": "BaseBdev2", 00:15:00.128 "uuid": "324504b6-b7e6-5195-9378-db73eb45f0f2", 00:15:00.128 "is_configured": true, 00:15:00.128 "data_offset": 2048, 00:15:00.128 "data_size": 63488 00:15:00.128 }, 00:15:00.128 { 00:15:00.128 "name": "BaseBdev3", 00:15:00.128 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:00.128 "is_configured": true, 00:15:00.128 "data_offset": 2048, 00:15:00.128 "data_size": 63488 00:15:00.128 }, 00:15:00.128 { 00:15:00.128 "name": "BaseBdev4", 00:15:00.129 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:00.129 "is_configured": true, 00:15:00.129 "data_offset": 2048, 00:15:00.129 "data_size": 63488 00:15:00.129 } 00:15:00.129 ] 00:15:00.129 }' 00:15:00.129 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.129 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.129 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.129 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.129 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.129 01:35:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.129 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.129 [2024-11-17 01:35:08.470113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.129 [2024-11-17 01:35:08.566418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:00.129 [2024-11-17 01:35:08.566679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:00.129 [2024-11-17 01:35:08.572809] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:00.129 [2024-11-17 01:35:08.582440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.129 [2024-11-17 01:35:08.582547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.129 [2024-11-17 01:35:08.582564] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:00.388 [2024-11-17 01:35:08.609849] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.388 01:35:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.388 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.388 "name": "raid_bdev1", 00:15:00.388 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:00.388 "strip_size_kb": 0, 00:15:00.388 "state": "online", 00:15:00.388 "raid_level": "raid1", 00:15:00.388 "superblock": true, 00:15:00.388 "num_base_bdevs": 4, 00:15:00.388 "num_base_bdevs_discovered": 3, 00:15:00.388 "num_base_bdevs_operational": 3, 00:15:00.388 "base_bdevs_list": [ 00:15:00.388 { 00:15:00.389 "name": null, 00:15:00.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.389 "is_configured": false, 00:15:00.389 "data_offset": 0, 00:15:00.389 "data_size": 63488 00:15:00.389 }, 00:15:00.389 { 00:15:00.389 "name": "BaseBdev2", 00:15:00.389 "uuid": "324504b6-b7e6-5195-9378-db73eb45f0f2", 00:15:00.389 "is_configured": true, 00:15:00.389 "data_offset": 2048, 00:15:00.389 
"data_size": 63488 00:15:00.389 }, 00:15:00.389 { 00:15:00.389 "name": "BaseBdev3", 00:15:00.389 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:00.389 "is_configured": true, 00:15:00.389 "data_offset": 2048, 00:15:00.389 "data_size": 63488 00:15:00.389 }, 00:15:00.389 { 00:15:00.389 "name": "BaseBdev4", 00:15:00.389 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:00.389 "is_configured": true, 00:15:00.389 "data_offset": 2048, 00:15:00.389 "data_size": 63488 00:15:00.389 } 00:15:00.389 ] 00:15:00.389 }' 00:15:00.389 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.389 01:35:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.648 159.50 IOPS, 478.50 MiB/s [2024-11-17T01:35:09.108Z] 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:00.648 "name": "raid_bdev1", 00:15:00.648 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:00.648 "strip_size_kb": 0, 00:15:00.648 "state": "online", 00:15:00.648 "raid_level": "raid1", 00:15:00.648 "superblock": true, 00:15:00.648 "num_base_bdevs": 4, 00:15:00.648 "num_base_bdevs_discovered": 3, 00:15:00.648 "num_base_bdevs_operational": 3, 00:15:00.648 "base_bdevs_list": [ 00:15:00.648 { 00:15:00.648 "name": null, 00:15:00.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.648 "is_configured": false, 00:15:00.648 "data_offset": 0, 00:15:00.648 "data_size": 63488 00:15:00.648 }, 00:15:00.648 { 00:15:00.648 "name": "BaseBdev2", 00:15:00.648 "uuid": "324504b6-b7e6-5195-9378-db73eb45f0f2", 00:15:00.648 "is_configured": true, 00:15:00.648 "data_offset": 2048, 00:15:00.648 "data_size": 63488 00:15:00.648 }, 00:15:00.648 { 00:15:00.648 "name": "BaseBdev3", 00:15:00.648 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:00.648 "is_configured": true, 00:15:00.648 "data_offset": 2048, 00:15:00.648 "data_size": 63488 00:15:00.648 }, 00:15:00.648 { 00:15:00.648 "name": "BaseBdev4", 00:15:00.648 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:00.648 "is_configured": true, 00:15:00.648 "data_offset": 2048, 00:15:00.648 "data_size": 63488 00:15:00.648 } 00:15:00.648 ] 00:15:00.648 }' 00:15:00.648 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.911 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.911 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.911 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.911 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.911 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.911 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.911 [2024-11-17 01:35:09.194640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.911 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.911 01:35:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:00.911 [2024-11-17 01:35:09.269957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:00.911 [2024-11-17 01:35:09.271884] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.171 [2024-11-17 01:35:09.394752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.171 [2024-11-17 01:35:09.396137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.171 [2024-11-17 01:35:09.606806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.171 [2024-11-17 01:35:09.607236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.430 [2024-11-17 01:35:09.847120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:01.690 160.33 IOPS, 481.00 MiB/s [2024-11-17T01:35:10.150Z] [2024-11-17 01:35:10.085160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:01.690 [2024-11-17 01:35:10.086034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:01.950 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.950 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.951 "name": "raid_bdev1", 00:15:01.951 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:01.951 "strip_size_kb": 0, 00:15:01.951 "state": "online", 00:15:01.951 "raid_level": "raid1", 00:15:01.951 "superblock": true, 00:15:01.951 "num_base_bdevs": 4, 00:15:01.951 "num_base_bdevs_discovered": 4, 00:15:01.951 "num_base_bdevs_operational": 4, 00:15:01.951 "process": { 00:15:01.951 "type": "rebuild", 00:15:01.951 "target": "spare", 00:15:01.951 "progress": { 00:15:01.951 "blocks": 10240, 00:15:01.951 "percent": 16 00:15:01.951 } 00:15:01.951 }, 00:15:01.951 "base_bdevs_list": [ 00:15:01.951 { 00:15:01.951 "name": "spare", 00:15:01.951 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:01.951 "is_configured": true, 00:15:01.951 "data_offset": 2048, 00:15:01.951 "data_size": 63488 00:15:01.951 }, 00:15:01.951 { 
00:15:01.951 "name": "BaseBdev2", 00:15:01.951 "uuid": "324504b6-b7e6-5195-9378-db73eb45f0f2", 00:15:01.951 "is_configured": true, 00:15:01.951 "data_offset": 2048, 00:15:01.951 "data_size": 63488 00:15:01.951 }, 00:15:01.951 { 00:15:01.951 "name": "BaseBdev3", 00:15:01.951 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:01.951 "is_configured": true, 00:15:01.951 "data_offset": 2048, 00:15:01.951 "data_size": 63488 00:15:01.951 }, 00:15:01.951 { 00:15:01.951 "name": "BaseBdev4", 00:15:01.951 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:01.951 "is_configured": true, 00:15:01.951 "data_offset": 2048, 00:15:01.951 "data_size": 63488 00:15:01.951 } 00:15:01.951 ] 00:15:01.951 }' 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:01.951 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.951 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.951 [2024-11-17 01:35:10.376106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.210 [2024-11-17 01:35:10.568265] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:02.210 [2024-11-17 01:35:10.568351] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.210 [2024-11-17 01:35:10.577715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:02.210 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.211 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.211 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.211 01:35:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.211 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.211 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.211 "name": "raid_bdev1", 00:15:02.211 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:02.211 "strip_size_kb": 0, 00:15:02.211 "state": "online", 00:15:02.211 "raid_level": "raid1", 00:15:02.211 "superblock": true, 00:15:02.211 "num_base_bdevs": 4, 00:15:02.211 "num_base_bdevs_discovered": 3, 00:15:02.211 "num_base_bdevs_operational": 3, 00:15:02.211 "process": { 00:15:02.211 "type": "rebuild", 00:15:02.211 "target": "spare", 00:15:02.211 "progress": { 00:15:02.211 "blocks": 14336, 00:15:02.211 "percent": 22 00:15:02.211 } 00:15:02.211 }, 00:15:02.211 "base_bdevs_list": [ 00:15:02.211 { 00:15:02.211 "name": "spare", 00:15:02.211 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:02.211 "is_configured": true, 00:15:02.211 "data_offset": 2048, 00:15:02.211 "data_size": 63488 00:15:02.211 }, 00:15:02.211 { 00:15:02.211 "name": null, 00:15:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.211 "is_configured": false, 00:15:02.211 "data_offset": 0, 00:15:02.211 "data_size": 63488 00:15:02.211 }, 00:15:02.211 { 00:15:02.211 "name": "BaseBdev3", 00:15:02.211 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:02.211 "is_configured": true, 00:15:02.211 "data_offset": 2048, 00:15:02.211 "data_size": 63488 00:15:02.211 }, 00:15:02.211 { 00:15:02.211 "name": "BaseBdev4", 00:15:02.211 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:02.211 "is_configured": true, 00:15:02.211 "data_offset": 2048, 00:15:02.211 "data_size": 63488 00:15:02.211 } 00:15:02.211 ] 00:15:02.211 }' 00:15:02.211 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.211 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.211 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=484 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.471 "name": "raid_bdev1", 00:15:02.471 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:02.471 "strip_size_kb": 0, 00:15:02.471 "state": "online", 00:15:02.471 "raid_level": "raid1", 00:15:02.471 "superblock": true, 00:15:02.471 
"num_base_bdevs": 4, 00:15:02.471 "num_base_bdevs_discovered": 3, 00:15:02.471 "num_base_bdevs_operational": 3, 00:15:02.471 "process": { 00:15:02.471 "type": "rebuild", 00:15:02.471 "target": "spare", 00:15:02.471 "progress": { 00:15:02.471 "blocks": 14336, 00:15:02.471 "percent": 22 00:15:02.471 } 00:15:02.471 }, 00:15:02.471 "base_bdevs_list": [ 00:15:02.471 { 00:15:02.471 "name": "spare", 00:15:02.471 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:02.471 "is_configured": true, 00:15:02.471 "data_offset": 2048, 00:15:02.471 "data_size": 63488 00:15:02.471 }, 00:15:02.471 { 00:15:02.471 "name": null, 00:15:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.471 "is_configured": false, 00:15:02.471 "data_offset": 0, 00:15:02.471 "data_size": 63488 00:15:02.471 }, 00:15:02.471 { 00:15:02.471 "name": "BaseBdev3", 00:15:02.471 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:02.471 "is_configured": true, 00:15:02.471 "data_offset": 2048, 00:15:02.471 "data_size": 63488 00:15:02.471 }, 00:15:02.471 { 00:15:02.471 "name": "BaseBdev4", 00:15:02.471 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:02.471 "is_configured": true, 00:15:02.471 "data_offset": 2048, 00:15:02.471 "data_size": 63488 00:15:02.471 } 00:15:02.471 ] 00:15:02.471 }' 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.471 [2024-11-17 01:35:10.787195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.471 01:35:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:15:02.732 135.25 IOPS, 405.75 MiB/s [2024-11-17T01:35:11.192Z] [2024-11-17 01:35:11.022589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:02.732 [2024-11-17 01:35:11.140174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:03.300 [2024-11-17 01:35:11.458145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:03.300 [2024-11-17 01:35:11.667366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:03.300 [2024-11-17 01:35:11.667847] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:03.560 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.561 "name": "raid_bdev1", 00:15:03.561 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:03.561 "strip_size_kb": 0, 00:15:03.561 "state": "online", 00:15:03.561 "raid_level": "raid1", 00:15:03.561 "superblock": true, 00:15:03.561 "num_base_bdevs": 4, 00:15:03.561 "num_base_bdevs_discovered": 3, 00:15:03.561 "num_base_bdevs_operational": 3, 00:15:03.561 "process": { 00:15:03.561 "type": "rebuild", 00:15:03.561 "target": "spare", 00:15:03.561 "progress": { 00:15:03.561 "blocks": 28672, 00:15:03.561 "percent": 45 00:15:03.561 } 00:15:03.561 }, 00:15:03.561 "base_bdevs_list": [ 00:15:03.561 { 00:15:03.561 "name": "spare", 00:15:03.561 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:03.561 "is_configured": true, 00:15:03.561 "data_offset": 2048, 00:15:03.561 "data_size": 63488 00:15:03.561 }, 00:15:03.561 { 00:15:03.561 "name": null, 00:15:03.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.561 "is_configured": false, 00:15:03.561 "data_offset": 0, 00:15:03.561 "data_size": 63488 00:15:03.561 }, 00:15:03.561 { 00:15:03.561 "name": "BaseBdev3", 00:15:03.561 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:03.561 "is_configured": true, 00:15:03.561 "data_offset": 2048, 00:15:03.561 "data_size": 63488 00:15:03.561 }, 00:15:03.561 { 00:15:03.561 "name": "BaseBdev4", 00:15:03.561 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:03.561 "is_configured": true, 00:15:03.561 "data_offset": 2048, 00:15:03.561 "data_size": 63488 00:15:03.561 } 00:15:03.561 ] 00:15:03.561 }' 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.561 117.20 IOPS, 351.60 MiB/s [2024-11-17T01:35:12.021Z] 01:35:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.561 01:35:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.821 [2024-11-17 01:35:12.020122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:03.821 [2024-11-17 01:35:12.128503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:04.080 [2024-11-17 01:35:12.337070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:04.080 [2024-11-17 01:35:12.337465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:04.080 [2024-11-17 01:35:12.454625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:04.340 [2024-11-17 01:35:12.795954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:04.599 [2024-11-17 01:35:12.915607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:04.599 105.67 IOPS, 317.00 MiB/s [2024-11-17T01:35:13.059Z] 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.599 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.599 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.599 
01:35:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.599 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.599 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.599 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.599 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.599 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.599 01:35:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.599 01:35:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.599 01:35:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.599 "name": "raid_bdev1", 00:15:04.599 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:04.599 "strip_size_kb": 0, 00:15:04.599 "state": "online", 00:15:04.599 "raid_level": "raid1", 00:15:04.599 "superblock": true, 00:15:04.599 "num_base_bdevs": 4, 00:15:04.599 "num_base_bdevs_discovered": 3, 00:15:04.599 "num_base_bdevs_operational": 3, 00:15:04.599 "process": { 00:15:04.599 "type": "rebuild", 00:15:04.599 "target": "spare", 00:15:04.599 "progress": { 00:15:04.599 "blocks": 47104, 00:15:04.599 "percent": 74 00:15:04.599 } 00:15:04.599 }, 00:15:04.599 "base_bdevs_list": [ 00:15:04.599 { 00:15:04.599 "name": "spare", 00:15:04.599 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:04.599 "is_configured": true, 00:15:04.599 "data_offset": 2048, 00:15:04.599 "data_size": 63488 00:15:04.599 }, 00:15:04.599 { 00:15:04.599 "name": null, 00:15:04.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.599 "is_configured": false, 00:15:04.599 "data_offset": 0, 00:15:04.599 "data_size": 
63488 00:15:04.599 }, 00:15:04.599 { 00:15:04.599 "name": "BaseBdev3", 00:15:04.599 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:04.599 "is_configured": true, 00:15:04.599 "data_offset": 2048, 00:15:04.599 "data_size": 63488 00:15:04.599 }, 00:15:04.599 { 00:15:04.599 "name": "BaseBdev4", 00:15:04.599 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:04.599 "is_configured": true, 00:15:04.599 "data_offset": 2048, 00:15:04.599 "data_size": 63488 00:15:04.599 } 00:15:04.599 ] 00:15:04.599 }' 00:15:04.599 01:35:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.859 01:35:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.859 01:35:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.859 01:35:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.859 01:35:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.859 [2024-11-17 01:35:13.227111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:05.118 [2024-11-17 01:35:13.350603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:05.118 [2024-11-17 01:35:13.351047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:05.687 94.43 IOPS, 283.29 MiB/s [2024-11-17T01:35:14.147Z] [2024-11-17 01:35:14.012948] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:05.687 [2024-11-17 01:35:14.117754] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:05.688 [2024-11-17 01:35:14.120119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:05.688 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.688 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.688 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.688 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.688 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.688 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.947 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.947 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.947 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.947 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.947 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.947 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.948 "name": "raid_bdev1", 00:15:05.948 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:05.948 "strip_size_kb": 0, 00:15:05.948 "state": "online", 00:15:05.948 "raid_level": "raid1", 00:15:05.948 "superblock": true, 00:15:05.948 "num_base_bdevs": 4, 00:15:05.948 "num_base_bdevs_discovered": 3, 00:15:05.948 "num_base_bdevs_operational": 3, 00:15:05.948 "base_bdevs_list": [ 00:15:05.948 { 00:15:05.948 "name": "spare", 00:15:05.948 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:05.948 "is_configured": true, 00:15:05.948 "data_offset": 2048, 00:15:05.948 "data_size": 63488 00:15:05.948 }, 00:15:05.948 { 00:15:05.948 
"name": null, 00:15:05.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.948 "is_configured": false, 00:15:05.948 "data_offset": 0, 00:15:05.948 "data_size": 63488 00:15:05.948 }, 00:15:05.948 { 00:15:05.948 "name": "BaseBdev3", 00:15:05.948 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:05.948 "is_configured": true, 00:15:05.948 "data_offset": 2048, 00:15:05.948 "data_size": 63488 00:15:05.948 }, 00:15:05.948 { 00:15:05.948 "name": "BaseBdev4", 00:15:05.948 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:05.948 "is_configured": true, 00:15:05.948 "data_offset": 2048, 00:15:05.948 "data_size": 63488 00:15:05.948 } 00:15:05.948 ] 00:15:05.948 }' 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.948 "name": "raid_bdev1", 00:15:05.948 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:05.948 "strip_size_kb": 0, 00:15:05.948 "state": "online", 00:15:05.948 "raid_level": "raid1", 00:15:05.948 "superblock": true, 00:15:05.948 "num_base_bdevs": 4, 00:15:05.948 "num_base_bdevs_discovered": 3, 00:15:05.948 "num_base_bdevs_operational": 3, 00:15:05.948 "base_bdevs_list": [ 00:15:05.948 { 00:15:05.948 "name": "spare", 00:15:05.948 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:05.948 "is_configured": true, 00:15:05.948 "data_offset": 2048, 00:15:05.948 "data_size": 63488 00:15:05.948 }, 00:15:05.948 { 00:15:05.948 "name": null, 00:15:05.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.948 "is_configured": false, 00:15:05.948 "data_offset": 0, 00:15:05.948 "data_size": 63488 00:15:05.948 }, 00:15:05.948 { 00:15:05.948 "name": "BaseBdev3", 00:15:05.948 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:05.948 "is_configured": true, 00:15:05.948 "data_offset": 2048, 00:15:05.948 "data_size": 63488 00:15:05.948 }, 00:15:05.948 { 00:15:05.948 "name": "BaseBdev4", 00:15:05.948 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:05.948 "is_configured": true, 00:15:05.948 "data_offset": 2048, 00:15:05.948 "data_size": 63488 00:15:05.948 } 00:15:05.948 ] 00:15:05.948 }' 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.948 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.207 01:35:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.207 "name": "raid_bdev1", 00:15:06.207 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:06.207 "strip_size_kb": 0, 00:15:06.207 "state": "online", 00:15:06.207 "raid_level": "raid1", 00:15:06.207 "superblock": true, 00:15:06.207 "num_base_bdevs": 4, 00:15:06.207 "num_base_bdevs_discovered": 3, 00:15:06.207 "num_base_bdevs_operational": 3, 00:15:06.207 "base_bdevs_list": [ 00:15:06.207 { 00:15:06.207 "name": "spare", 00:15:06.207 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:06.207 "is_configured": true, 00:15:06.207 "data_offset": 2048, 00:15:06.207 "data_size": 63488 00:15:06.207 }, 00:15:06.207 { 00:15:06.207 "name": null, 00:15:06.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.207 "is_configured": false, 00:15:06.207 "data_offset": 0, 00:15:06.207 "data_size": 63488 00:15:06.207 }, 00:15:06.207 { 00:15:06.207 "name": "BaseBdev3", 00:15:06.207 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:06.207 "is_configured": true, 00:15:06.207 "data_offset": 2048, 00:15:06.207 "data_size": 63488 00:15:06.207 }, 00:15:06.207 { 00:15:06.207 "name": "BaseBdev4", 00:15:06.207 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:06.207 "is_configured": true, 00:15:06.207 "data_offset": 2048, 00:15:06.207 "data_size": 63488 00:15:06.207 } 00:15:06.207 ] 00:15:06.207 }' 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.207 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.466 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.466 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.466 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.466 [2024-11-17 01:35:14.837981] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.466 [2024-11-17 01:35:14.838012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.466 00:15:06.466 Latency(us) 00:15:06.466 [2024-11-17T01:35:14.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.466 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:06.466 raid_bdev1 : 7.98 87.14 261.43 0.00 0.00 14726.25 302.28 116304.94 00:15:06.466 [2024-11-17T01:35:14.926Z] =================================================================================================================== 00:15:06.466 [2024-11-17T01:35:14.926Z] Total : 87.14 261.43 0.00 0.00 14726.25 302.28 116304.94 00:15:06.466 [2024-11-17 01:35:14.894437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.466 [2024-11-17 01:35:14.894478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.466 [2024-11-17 01:35:14.894574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.466 [2024-11-17 01:35:14.894584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:06.466 { 00:15:06.466 "results": [ 00:15:06.466 { 00:15:06.466 "job": "raid_bdev1", 00:15:06.466 "core_mask": "0x1", 00:15:06.466 "workload": "randrw", 00:15:06.466 "percentage": 50, 00:15:06.466 "status": "finished", 00:15:06.466 "queue_depth": 2, 00:15:06.466 "io_size": 3145728, 00:15:06.466 "runtime": 7.975332, 00:15:06.466 "iops": 87.14370762245383, 00:15:06.466 "mibps": 261.4311228673615, 00:15:06.466 "io_failed": 0, 00:15:06.466 "io_timeout": 0, 00:15:06.466 "avg_latency_us": 14726.249163394175, 00:15:06.466 "min_latency_us": 302.2812227074236, 00:15:06.466 "max_latency_us": 116304.93624454149 00:15:06.466 } 00:15:06.466 ], 00:15:06.466 
"core_count": 1 00:15:06.467 } 00:15:06.467 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.467 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.467 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:06.467 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.467 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.467 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.762 01:35:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:06.762 01:35:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:06.762 /dev/nbd0 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.762 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.762 1+0 records in 00:15:06.763 1+0 records out 00:15:06.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376141 s, 10.9 MB/s 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.029 
01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:07.029 /dev/nbd1 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.029 1+0 records in 00:15:07.029 1+0 records out 00:15:07.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456715 s, 9.0 MB/s 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:07.029 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.030 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.030 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:07.030 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.030 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.030 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:07.290 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:07.290 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.290 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:07.290 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.290 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:07.290 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.290 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:07.550 01:35:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.550 01:35:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:07.809 /dev/nbd1 00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:07.809 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.810 1+0 records in 00:15:07.810 1+0 records out 00:15:07.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229403 s, 17.9 MB/s 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.810 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.070 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.330 [2024-11-17 01:35:16.647394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.330 [2024-11-17 01:35:16.647466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.330 [2024-11-17 01:35:16.647490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:08.330 [2024-11-17 01:35:16.647499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.330 [2024-11-17 01:35:16.649621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.330 [2024-11-17 01:35:16.649661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.330 [2024-11-17 01:35:16.649749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:08.330 [2024-11-17 01:35:16.649836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.330 [2024-11-17 01:35:16.649975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.330 [2024-11-17 01:35:16.650082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:08.330 spare 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.330 [2024-11-17 01:35:16.749976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:08.330 [2024-11-17 01:35:16.750004] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:08.330 [2024-11-17 01:35:16.750264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:08.330 [2024-11-17 01:35:16.750417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:08.330 [2024-11-17 01:35:16.750436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:08.330 [2024-11-17 01:35:16.750615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.330 01:35:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.330 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.590 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.590 "name": "raid_bdev1", 00:15:08.590 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:08.590 "strip_size_kb": 0, 00:15:08.590 "state": "online", 00:15:08.590 "raid_level": "raid1", 00:15:08.590 "superblock": true, 00:15:08.590 "num_base_bdevs": 4, 00:15:08.590 "num_base_bdevs_discovered": 3, 00:15:08.590 "num_base_bdevs_operational": 3, 00:15:08.590 "base_bdevs_list": [ 00:15:08.590 { 00:15:08.590 "name": "spare", 00:15:08.590 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 2048, 00:15:08.590 "data_size": 63488 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "name": null, 00:15:08.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.590 "is_configured": false, 00:15:08.590 "data_offset": 2048, 00:15:08.590 "data_size": 63488 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "name": "BaseBdev3", 00:15:08.590 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 2048, 00:15:08.590 "data_size": 63488 00:15:08.590 }, 00:15:08.590 { 00:15:08.590 "name": "BaseBdev4", 00:15:08.590 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:08.590 "is_configured": true, 00:15:08.590 "data_offset": 2048, 00:15:08.590 "data_size": 63488 00:15:08.590 } 00:15:08.590 ] 00:15:08.590 }' 00:15:08.590 01:35:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.590 01:35:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.850 "name": "raid_bdev1", 00:15:08.850 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:08.850 "strip_size_kb": 0, 00:15:08.850 "state": "online", 00:15:08.850 "raid_level": "raid1", 00:15:08.850 "superblock": true, 00:15:08.850 "num_base_bdevs": 4, 00:15:08.850 "num_base_bdevs_discovered": 3, 00:15:08.850 "num_base_bdevs_operational": 3, 00:15:08.850 "base_bdevs_list": [ 00:15:08.850 { 00:15:08.850 "name": "spare", 00:15:08.850 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:08.850 "is_configured": true, 00:15:08.850 "data_offset": 2048, 00:15:08.850 "data_size": 63488 00:15:08.850 }, 00:15:08.850 { 00:15:08.850 "name": null, 00:15:08.850 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:08.850 "is_configured": false, 00:15:08.850 "data_offset": 2048, 00:15:08.850 "data_size": 63488 00:15:08.850 }, 00:15:08.850 { 00:15:08.850 "name": "BaseBdev3", 00:15:08.850 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:08.850 "is_configured": true, 00:15:08.850 "data_offset": 2048, 00:15:08.850 "data_size": 63488 00:15:08.850 }, 00:15:08.850 { 00:15:08.850 "name": "BaseBdev4", 00:15:08.850 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:08.850 "is_configured": true, 00:15:08.850 "data_offset": 2048, 00:15:08.850 "data_size": 63488 00:15:08.850 } 00:15:08.850 ] 00:15:08.850 }' 00:15:08.850 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.110 [2024-11-17 01:35:17.430303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.110 "name": "raid_bdev1", 00:15:09.110 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:09.110 "strip_size_kb": 0, 00:15:09.110 "state": "online", 00:15:09.110 "raid_level": "raid1", 00:15:09.110 "superblock": true, 00:15:09.110 "num_base_bdevs": 4, 00:15:09.110 "num_base_bdevs_discovered": 2, 00:15:09.110 "num_base_bdevs_operational": 2, 00:15:09.110 "base_bdevs_list": [ 00:15:09.110 { 00:15:09.110 "name": null, 00:15:09.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.110 "is_configured": false, 00:15:09.110 "data_offset": 0, 00:15:09.110 "data_size": 63488 00:15:09.110 }, 00:15:09.110 { 00:15:09.110 "name": null, 00:15:09.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.110 "is_configured": false, 00:15:09.110 "data_offset": 2048, 00:15:09.110 "data_size": 63488 00:15:09.110 }, 00:15:09.110 { 00:15:09.110 "name": "BaseBdev3", 00:15:09.110 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:09.110 "is_configured": true, 00:15:09.110 "data_offset": 2048, 00:15:09.110 "data_size": 63488 00:15:09.110 }, 00:15:09.110 { 00:15:09.110 "name": "BaseBdev4", 00:15:09.110 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:09.110 "is_configured": true, 00:15:09.110 "data_offset": 2048, 00:15:09.110 "data_size": 63488 00:15:09.110 } 00:15:09.110 ] 00:15:09.110 }' 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.110 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.679 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:09.679 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.679 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.679 [2024-11-17 01:35:17.929564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.679 [2024-11-17 01:35:17.929788] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:09.679 [2024-11-17 01:35:17.929806] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:09.679 [2024-11-17 01:35:17.929849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.679 [2024-11-17 01:35:17.944999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:09.679 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.679 01:35:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:09.679 [2024-11-17 01:35:17.946903] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:10.618 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.619 01:35:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.619 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.619 "name": "raid_bdev1", 00:15:10.619 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:10.619 "strip_size_kb": 0, 00:15:10.619 "state": "online", 00:15:10.619 "raid_level": "raid1", 00:15:10.619 "superblock": true, 00:15:10.619 "num_base_bdevs": 4, 00:15:10.619 "num_base_bdevs_discovered": 3, 00:15:10.619 "num_base_bdevs_operational": 3, 00:15:10.619 "process": { 00:15:10.619 "type": "rebuild", 00:15:10.619 "target": "spare", 00:15:10.619 "progress": { 00:15:10.619 "blocks": 20480, 00:15:10.619 "percent": 32 00:15:10.619 } 00:15:10.619 }, 00:15:10.619 "base_bdevs_list": [ 00:15:10.619 { 00:15:10.619 "name": "spare", 00:15:10.619 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:10.619 "is_configured": true, 00:15:10.619 "data_offset": 2048, 00:15:10.619 "data_size": 63488 00:15:10.619 }, 00:15:10.619 { 00:15:10.619 "name": null, 00:15:10.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.619 "is_configured": false, 00:15:10.619 "data_offset": 2048, 00:15:10.619 "data_size": 63488 00:15:10.619 }, 00:15:10.619 { 00:15:10.619 "name": "BaseBdev3", 00:15:10.619 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:10.619 "is_configured": true, 00:15:10.619 "data_offset": 2048, 00:15:10.619 "data_size": 63488 00:15:10.619 }, 00:15:10.619 { 00:15:10.619 "name": "BaseBdev4", 00:15:10.619 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:10.619 "is_configured": true, 00:15:10.619 "data_offset": 2048, 00:15:10.619 "data_size": 63488 00:15:10.619 } 00:15:10.619 ] 00:15:10.619 }' 00:15:10.619 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.619 01:35:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.619 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.878 [2024-11-17 01:35:19.099331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.878 [2024-11-17 01:35:19.152296] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:10.878 [2024-11-17 01:35:19.152369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.878 [2024-11-17 01:35:19.152388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:10.878 [2024-11-17 01:35:19.152395] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.878 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.879 01:35:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.879 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.879 "name": "raid_bdev1", 00:15:10.879 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:10.879 "strip_size_kb": 0, 00:15:10.879 "state": "online", 00:15:10.879 "raid_level": "raid1", 00:15:10.879 "superblock": true, 00:15:10.879 "num_base_bdevs": 4, 00:15:10.879 "num_base_bdevs_discovered": 2, 00:15:10.879 "num_base_bdevs_operational": 2, 00:15:10.879 "base_bdevs_list": [ 00:15:10.879 { 00:15:10.879 "name": null, 00:15:10.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.879 "is_configured": false, 00:15:10.879 "data_offset": 0, 00:15:10.879 "data_size": 63488 00:15:10.879 }, 00:15:10.879 { 00:15:10.879 "name": null, 00:15:10.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.879 "is_configured": false, 00:15:10.879 "data_offset": 2048, 00:15:10.879 
"data_size": 63488 00:15:10.879 }, 00:15:10.879 { 00:15:10.879 "name": "BaseBdev3", 00:15:10.879 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:10.879 "is_configured": true, 00:15:10.879 "data_offset": 2048, 00:15:10.879 "data_size": 63488 00:15:10.879 }, 00:15:10.879 { 00:15:10.880 "name": "BaseBdev4", 00:15:10.880 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:10.880 "is_configured": true, 00:15:10.880 "data_offset": 2048, 00:15:10.880 "data_size": 63488 00:15:10.880 } 00:15:10.880 ] 00:15:10.880 }' 00:15:10.880 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.880 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.452 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:11.452 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.452 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.452 [2024-11-17 01:35:19.636464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:11.452 [2024-11-17 01:35:19.636541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.452 [2024-11-17 01:35:19.636568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:11.452 [2024-11-17 01:35:19.636577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.452 [2024-11-17 01:35:19.637061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.452 [2024-11-17 01:35:19.637094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:11.452 [2024-11-17 01:35:19.637192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:11.452 [2024-11-17 01:35:19.637209] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:11.452 [2024-11-17 01:35:19.637222] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:11.452 [2024-11-17 01:35:19.637246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.452 [2024-11-17 01:35:19.651145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:11.452 spare 00:15:11.452 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.452 [2024-11-17 01:35:19.652950] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.452 01:35:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.423 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.423 "name": "raid_bdev1", 00:15:12.423 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:12.423 "strip_size_kb": 0, 00:15:12.423 "state": "online", 00:15:12.423 "raid_level": "raid1", 00:15:12.423 "superblock": true, 00:15:12.423 "num_base_bdevs": 4, 00:15:12.423 "num_base_bdevs_discovered": 3, 00:15:12.423 "num_base_bdevs_operational": 3, 00:15:12.423 "process": { 00:15:12.423 "type": "rebuild", 00:15:12.423 "target": "spare", 00:15:12.423 "progress": { 00:15:12.423 "blocks": 20480, 00:15:12.424 "percent": 32 00:15:12.424 } 00:15:12.424 }, 00:15:12.424 "base_bdevs_list": [ 00:15:12.424 { 00:15:12.424 "name": "spare", 00:15:12.424 "uuid": "d826fe94-f937-57f6-a3a7-2437fc328acc", 00:15:12.424 "is_configured": true, 00:15:12.424 "data_offset": 2048, 00:15:12.424 "data_size": 63488 00:15:12.424 }, 00:15:12.424 { 00:15:12.424 "name": null, 00:15:12.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.424 "is_configured": false, 00:15:12.424 "data_offset": 2048, 00:15:12.424 "data_size": 63488 00:15:12.424 }, 00:15:12.424 { 00:15:12.424 "name": "BaseBdev3", 00:15:12.424 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:12.424 "is_configured": true, 00:15:12.424 "data_offset": 2048, 00:15:12.424 "data_size": 63488 00:15:12.424 }, 00:15:12.424 { 00:15:12.424 "name": "BaseBdev4", 00:15:12.424 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:12.424 "is_configured": true, 00:15:12.424 "data_offset": 2048, 00:15:12.424 "data_size": 63488 00:15:12.424 } 00:15:12.424 ] 00:15:12.424 }' 00:15:12.424 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.424 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.424 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:15:12.424 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.424 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:12.424 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.424 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.424 [2024-11-17 01:35:20.816693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.424 [2024-11-17 01:35:20.857598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.424 [2024-11-17 01:35:20.857687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.424 [2024-11-17 01:35:20.857706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.424 [2024-11-17 01:35:20.857715] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.683 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.683 "name": "raid_bdev1", 00:15:12.683 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:12.683 "strip_size_kb": 0, 00:15:12.683 "state": "online", 00:15:12.683 "raid_level": "raid1", 00:15:12.683 "superblock": true, 00:15:12.683 "num_base_bdevs": 4, 00:15:12.683 "num_base_bdevs_discovered": 2, 00:15:12.683 "num_base_bdevs_operational": 2, 00:15:12.683 "base_bdevs_list": [ 00:15:12.683 { 00:15:12.683 "name": null, 00:15:12.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.683 "is_configured": false, 00:15:12.683 "data_offset": 0, 00:15:12.683 "data_size": 63488 00:15:12.683 }, 00:15:12.683 { 00:15:12.683 "name": null, 00:15:12.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.683 "is_configured": false, 00:15:12.683 "data_offset": 2048, 00:15:12.683 "data_size": 63488 00:15:12.683 }, 00:15:12.683 { 00:15:12.683 "name": "BaseBdev3", 00:15:12.683 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:12.683 "is_configured": true, 
00:15:12.683 "data_offset": 2048, 00:15:12.683 "data_size": 63488 00:15:12.683 }, 00:15:12.683 { 00:15:12.683 "name": "BaseBdev4", 00:15:12.683 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:12.683 "is_configured": true, 00:15:12.683 "data_offset": 2048, 00:15:12.683 "data_size": 63488 00:15:12.683 } 00:15:12.683 ] 00:15:12.683 }' 00:15:12.684 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.684 01:35:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.943 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.204 "name": "raid_bdev1", 00:15:13.204 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:13.204 "strip_size_kb": 0, 00:15:13.204 "state": "online", 00:15:13.204 "raid_level": "raid1", 00:15:13.204 
"superblock": true, 00:15:13.204 "num_base_bdevs": 4, 00:15:13.204 "num_base_bdevs_discovered": 2, 00:15:13.204 "num_base_bdevs_operational": 2, 00:15:13.204 "base_bdevs_list": [ 00:15:13.204 { 00:15:13.204 "name": null, 00:15:13.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.204 "is_configured": false, 00:15:13.204 "data_offset": 0, 00:15:13.204 "data_size": 63488 00:15:13.204 }, 00:15:13.204 { 00:15:13.204 "name": null, 00:15:13.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.204 "is_configured": false, 00:15:13.204 "data_offset": 2048, 00:15:13.204 "data_size": 63488 00:15:13.204 }, 00:15:13.204 { 00:15:13.204 "name": "BaseBdev3", 00:15:13.204 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:13.204 "is_configured": true, 00:15:13.204 "data_offset": 2048, 00:15:13.204 "data_size": 63488 00:15:13.204 }, 00:15:13.204 { 00:15:13.204 "name": "BaseBdev4", 00:15:13.204 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:13.204 "is_configured": true, 00:15:13.204 "data_offset": 2048, 00:15:13.204 "data_size": 63488 00:15:13.204 } 00:15:13.204 ] 00:15:13.204 }' 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.204 [2024-11-17 01:35:21.507978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:13.204 [2024-11-17 01:35:21.508039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.204 [2024-11-17 01:35:21.508057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:13.204 [2024-11-17 01:35:21.508068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.204 [2024-11-17 01:35:21.508494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.204 [2024-11-17 01:35:21.508541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.204 [2024-11-17 01:35:21.508621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:13.204 [2024-11-17 01:35:21.508640] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:13.204 [2024-11-17 01:35:21.508647] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:13.204 [2024-11-17 01:35:21.508658] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:13.204 BaseBdev1 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.204 01:35:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.145 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.145 "name": "raid_bdev1", 00:15:14.145 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:14.145 "strip_size_kb": 0, 00:15:14.145 "state": "online", 00:15:14.145 "raid_level": "raid1", 00:15:14.145 "superblock": true, 00:15:14.146 
"num_base_bdevs": 4, 00:15:14.146 "num_base_bdevs_discovered": 2, 00:15:14.146 "num_base_bdevs_operational": 2, 00:15:14.146 "base_bdevs_list": [ 00:15:14.146 { 00:15:14.146 "name": null, 00:15:14.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.146 "is_configured": false, 00:15:14.146 "data_offset": 0, 00:15:14.146 "data_size": 63488 00:15:14.146 }, 00:15:14.146 { 00:15:14.146 "name": null, 00:15:14.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.146 "is_configured": false, 00:15:14.146 "data_offset": 2048, 00:15:14.146 "data_size": 63488 00:15:14.146 }, 00:15:14.146 { 00:15:14.146 "name": "BaseBdev3", 00:15:14.146 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:14.146 "is_configured": true, 00:15:14.146 "data_offset": 2048, 00:15:14.146 "data_size": 63488 00:15:14.146 }, 00:15:14.146 { 00:15:14.146 "name": "BaseBdev4", 00:15:14.146 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:14.146 "is_configured": true, 00:15:14.146 "data_offset": 2048, 00:15:14.146 "data_size": 63488 00:15:14.146 } 00:15:14.146 ] 00:15:14.146 }' 00:15:14.146 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.146 01:35:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.715 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.715 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.716 01:35:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.716 "name": "raid_bdev1", 00:15:14.716 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:14.716 "strip_size_kb": 0, 00:15:14.716 "state": "online", 00:15:14.716 "raid_level": "raid1", 00:15:14.716 "superblock": true, 00:15:14.716 "num_base_bdevs": 4, 00:15:14.716 "num_base_bdevs_discovered": 2, 00:15:14.716 "num_base_bdevs_operational": 2, 00:15:14.716 "base_bdevs_list": [ 00:15:14.716 { 00:15:14.716 "name": null, 00:15:14.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.716 "is_configured": false, 00:15:14.716 "data_offset": 0, 00:15:14.716 "data_size": 63488 00:15:14.716 }, 00:15:14.716 { 00:15:14.716 "name": null, 00:15:14.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.716 "is_configured": false, 00:15:14.716 "data_offset": 2048, 00:15:14.716 "data_size": 63488 00:15:14.716 }, 00:15:14.716 { 00:15:14.716 "name": "BaseBdev3", 00:15:14.716 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:14.716 "is_configured": true, 00:15:14.716 "data_offset": 2048, 00:15:14.716 "data_size": 63488 00:15:14.716 }, 00:15:14.716 { 00:15:14.716 "name": "BaseBdev4", 00:15:14.716 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:14.716 "is_configured": true, 00:15:14.716 "data_offset": 2048, 00:15:14.716 "data_size": 63488 00:15:14.716 } 00:15:14.716 ] 00:15:14.716 }' 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.716 01:35:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.716 [2024-11-17 01:35:23.113528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.716 [2024-11-17 01:35:23.113699] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:14.716 [2024-11-17 01:35:23.113711] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:15:14.716 request: 00:15:14.716 { 00:15:14.716 "base_bdev": "BaseBdev1", 00:15:14.716 "raid_bdev": "raid_bdev1", 00:15:14.716 "method": "bdev_raid_add_base_bdev", 00:15:14.716 "req_id": 1 00:15:14.716 } 00:15:14.716 Got JSON-RPC error response 00:15:14.716 response: 00:15:14.716 { 00:15:14.716 "code": -22, 00:15:14.716 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:14.716 } 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.716 01:35:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.129 01:35:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.129 "name": "raid_bdev1", 00:15:16.129 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:16.129 "strip_size_kb": 0, 00:15:16.129 "state": "online", 00:15:16.129 "raid_level": "raid1", 00:15:16.129 "superblock": true, 00:15:16.129 "num_base_bdevs": 4, 00:15:16.129 "num_base_bdevs_discovered": 2, 00:15:16.129 "num_base_bdevs_operational": 2, 00:15:16.129 "base_bdevs_list": [ 00:15:16.129 { 00:15:16.129 "name": null, 00:15:16.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.129 "is_configured": false, 00:15:16.129 "data_offset": 0, 00:15:16.129 "data_size": 63488 00:15:16.129 }, 00:15:16.129 { 00:15:16.129 "name": null, 00:15:16.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.129 "is_configured": false, 00:15:16.129 "data_offset": 2048, 00:15:16.129 "data_size": 63488 00:15:16.129 }, 00:15:16.129 { 00:15:16.129 "name": "BaseBdev3", 00:15:16.129 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:16.129 "is_configured": true, 00:15:16.129 "data_offset": 2048, 00:15:16.129 "data_size": 63488 00:15:16.129 }, 00:15:16.129 { 00:15:16.129 "name": "BaseBdev4", 00:15:16.129 "uuid": 
"4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:16.129 "is_configured": true, 00:15:16.129 "data_offset": 2048, 00:15:16.129 "data_size": 63488 00:15:16.129 } 00:15:16.129 ] 00:15:16.129 }' 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.129 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.390 "name": "raid_bdev1", 00:15:16.390 "uuid": "0abe0b2c-988a-44ed-aa43-e08b4b6a3d4c", 00:15:16.390 "strip_size_kb": 0, 00:15:16.390 "state": "online", 00:15:16.390 "raid_level": "raid1", 00:15:16.390 "superblock": true, 00:15:16.390 "num_base_bdevs": 4, 00:15:16.390 "num_base_bdevs_discovered": 2, 00:15:16.390 "num_base_bdevs_operational": 2, 00:15:16.390 
"base_bdevs_list": [ 00:15:16.390 { 00:15:16.390 "name": null, 00:15:16.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.390 "is_configured": false, 00:15:16.390 "data_offset": 0, 00:15:16.390 "data_size": 63488 00:15:16.390 }, 00:15:16.390 { 00:15:16.390 "name": null, 00:15:16.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.390 "is_configured": false, 00:15:16.390 "data_offset": 2048, 00:15:16.390 "data_size": 63488 00:15:16.390 }, 00:15:16.390 { 00:15:16.390 "name": "BaseBdev3", 00:15:16.390 "uuid": "662dff01-d156-5f99-a882-3b6008aac57e", 00:15:16.390 "is_configured": true, 00:15:16.390 "data_offset": 2048, 00:15:16.390 "data_size": 63488 00:15:16.390 }, 00:15:16.390 { 00:15:16.390 "name": "BaseBdev4", 00:15:16.390 "uuid": "4572e9b1-acba-5905-9df0-2f9024f199e1", 00:15:16.390 "is_configured": true, 00:15:16.390 "data_offset": 2048, 00:15:16.390 "data_size": 63488 00:15:16.390 } 00:15:16.390 ] 00:15:16.390 }' 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78911 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78911 ']' 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78911 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78911 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78911' 00:15:16.390 killing process with pid 78911 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78911 00:15:16.390 Received shutdown signal, test time was about 17.842663 seconds 00:15:16.390 00:15:16.390 Latency(us) 00:15:16.390 [2024-11-17T01:35:24.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.390 [2024-11-17T01:35:24.850Z] =================================================================================================================== 00:15:16.390 [2024-11-17T01:35:24.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:16.390 [2024-11-17 01:35:24.722185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.390 [2024-11-17 01:35:24.722310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.390 01:35:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78911 00:15:16.390 [2024-11-17 01:35:24.722389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.390 [2024-11-17 01:35:24.722398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:16.961 [2024-11-17 01:35:25.115899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.902 01:35:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:17.902 00:15:17.902 real 0m21.136s 00:15:17.902 user 0m27.674s 00:15:17.902 sys 0m2.676s 00:15:17.902 01:35:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.902 01:35:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.902 ************************************ 00:15:17.902 END TEST raid_rebuild_test_sb_io 00:15:17.902 ************************************ 00:15:17.902 01:35:26 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:17.902 01:35:26 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:17.902 01:35:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:17.902 01:35:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.902 01:35:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.902 ************************************ 00:15:17.902 START TEST raid5f_state_function_test 00:15:17.902 ************************************ 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.902 01:35:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:17.902 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79628 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:17.903 Process raid pid: 79628 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79628' 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79628 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79628 ']' 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.903 01:35:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.163 [2024-11-17 01:35:26.374440] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:18.163 [2024-11-17 01:35:26.374555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.163 [2024-11-17 01:35:26.551110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.423 [2024-11-17 01:35:26.656451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.423 [2024-11-17 01:35:26.853372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.423 [2024-11-17 01:35:26.853412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.993 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.993 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:18.993 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:18.993 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.993 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.993 [2024-11-17 01:35:27.196365] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.993 [2024-11-17 01:35:27.196416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.993 [2024-11-17 01:35:27.196426] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.994 [2024-11-17 01:35:27.196436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.994 [2024-11-17 01:35:27.196442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:18.994 [2024-11-17 01:35:27.196450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.994 "name": "Existed_Raid", 00:15:18.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.994 "strip_size_kb": 64, 00:15:18.994 "state": "configuring", 00:15:18.994 "raid_level": "raid5f", 00:15:18.994 "superblock": false, 00:15:18.994 "num_base_bdevs": 3, 00:15:18.994 "num_base_bdevs_discovered": 0, 00:15:18.994 "num_base_bdevs_operational": 3, 00:15:18.994 "base_bdevs_list": [ 00:15:18.994 { 00:15:18.994 "name": "BaseBdev1", 00:15:18.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.994 "is_configured": false, 00:15:18.994 "data_offset": 0, 00:15:18.994 "data_size": 0 00:15:18.994 }, 00:15:18.994 { 00:15:18.994 "name": "BaseBdev2", 00:15:18.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.994 "is_configured": false, 00:15:18.994 "data_offset": 0, 00:15:18.994 "data_size": 0 00:15:18.994 }, 00:15:18.994 { 00:15:18.994 "name": "BaseBdev3", 00:15:18.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.994 "is_configured": false, 00:15:18.994 "data_offset": 0, 00:15:18.994 "data_size": 0 00:15:18.994 } 00:15:18.994 ] 00:15:18.994 }' 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.994 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.255 [2024-11-17 01:35:27.619612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.255 [2024-11-17 01:35:27.619652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.255 [2024-11-17 01:35:27.631578] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.255 [2024-11-17 01:35:27.631640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.255 [2024-11-17 01:35:27.631650] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.255 [2024-11-17 01:35:27.631659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.255 [2024-11-17 01:35:27.631665] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.255 [2024-11-17 01:35:27.631674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.255 [2024-11-17 01:35:27.677340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.255 BaseBdev1 00:15:19.255 01:35:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.255 [ 00:15:19.255 { 00:15:19.255 "name": "BaseBdev1", 00:15:19.255 "aliases": [ 00:15:19.255 "5ce7948f-20c7-477f-8feb-d8e668c0497b" 00:15:19.255 ], 00:15:19.255 "product_name": "Malloc disk", 00:15:19.255 "block_size": 512, 00:15:19.255 "num_blocks": 65536, 00:15:19.255 "uuid": "5ce7948f-20c7-477f-8feb-d8e668c0497b", 00:15:19.255 "assigned_rate_limits": { 00:15:19.255 "rw_ios_per_sec": 0, 00:15:19.255 
"rw_mbytes_per_sec": 0, 00:15:19.255 "r_mbytes_per_sec": 0, 00:15:19.255 "w_mbytes_per_sec": 0 00:15:19.255 }, 00:15:19.255 "claimed": true, 00:15:19.255 "claim_type": "exclusive_write", 00:15:19.255 "zoned": false, 00:15:19.255 "supported_io_types": { 00:15:19.255 "read": true, 00:15:19.255 "write": true, 00:15:19.255 "unmap": true, 00:15:19.255 "flush": true, 00:15:19.255 "reset": true, 00:15:19.255 "nvme_admin": false, 00:15:19.255 "nvme_io": false, 00:15:19.255 "nvme_io_md": false, 00:15:19.255 "write_zeroes": true, 00:15:19.255 "zcopy": true, 00:15:19.255 "get_zone_info": false, 00:15:19.255 "zone_management": false, 00:15:19.255 "zone_append": false, 00:15:19.255 "compare": false, 00:15:19.255 "compare_and_write": false, 00:15:19.255 "abort": true, 00:15:19.255 "seek_hole": false, 00:15:19.255 "seek_data": false, 00:15:19.255 "copy": true, 00:15:19.255 "nvme_iov_md": false 00:15:19.255 }, 00:15:19.255 "memory_domains": [ 00:15:19.255 { 00:15:19.255 "dma_device_id": "system", 00:15:19.255 "dma_device_type": 1 00:15:19.255 }, 00:15:19.255 { 00:15:19.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.255 "dma_device_type": 2 00:15:19.255 } 00:15:19.255 ], 00:15:19.255 "driver_specific": {} 00:15:19.255 } 00:15:19.255 ] 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.255 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.515 01:35:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.515 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.515 "name": "Existed_Raid", 00:15:19.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.516 "strip_size_kb": 64, 00:15:19.516 "state": "configuring", 00:15:19.516 "raid_level": "raid5f", 00:15:19.516 "superblock": false, 00:15:19.516 "num_base_bdevs": 3, 00:15:19.516 "num_base_bdevs_discovered": 1, 00:15:19.516 "num_base_bdevs_operational": 3, 00:15:19.516 "base_bdevs_list": [ 00:15:19.516 { 00:15:19.516 "name": "BaseBdev1", 00:15:19.516 "uuid": "5ce7948f-20c7-477f-8feb-d8e668c0497b", 00:15:19.516 "is_configured": true, 00:15:19.516 "data_offset": 0, 00:15:19.516 "data_size": 65536 00:15:19.516 }, 00:15:19.516 { 00:15:19.516 "name": 
"BaseBdev2", 00:15:19.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.516 "is_configured": false, 00:15:19.516 "data_offset": 0, 00:15:19.516 "data_size": 0 00:15:19.516 }, 00:15:19.516 { 00:15:19.516 "name": "BaseBdev3", 00:15:19.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.516 "is_configured": false, 00:15:19.516 "data_offset": 0, 00:15:19.516 "data_size": 0 00:15:19.516 } 00:15:19.516 ] 00:15:19.516 }' 00:15:19.516 01:35:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.516 01:35:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.775 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.775 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.775 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.775 [2024-11-17 01:35:28.152550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.775 [2024-11-17 01:35:28.152596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:19.775 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.775 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:19.775 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.775 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.775 [2024-11-17 01:35:28.164575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.775 [2024-11-17 01:35:28.166366] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:19.775 [2024-11-17 01:35:28.166405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.776 [2024-11-17 01:35:28.166415] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.776 [2024-11-17 01:35:28.166423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.776 "name": "Existed_Raid", 00:15:19.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.776 "strip_size_kb": 64, 00:15:19.776 "state": "configuring", 00:15:19.776 "raid_level": "raid5f", 00:15:19.776 "superblock": false, 00:15:19.776 "num_base_bdevs": 3, 00:15:19.776 "num_base_bdevs_discovered": 1, 00:15:19.776 "num_base_bdevs_operational": 3, 00:15:19.776 "base_bdevs_list": [ 00:15:19.776 { 00:15:19.776 "name": "BaseBdev1", 00:15:19.776 "uuid": "5ce7948f-20c7-477f-8feb-d8e668c0497b", 00:15:19.776 "is_configured": true, 00:15:19.776 "data_offset": 0, 00:15:19.776 "data_size": 65536 00:15:19.776 }, 00:15:19.776 { 00:15:19.776 "name": "BaseBdev2", 00:15:19.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.776 "is_configured": false, 00:15:19.776 "data_offset": 0, 00:15:19.776 "data_size": 0 00:15:19.776 }, 00:15:19.776 { 00:15:19.776 "name": "BaseBdev3", 00:15:19.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.776 "is_configured": false, 00:15:19.776 "data_offset": 0, 00:15:19.776 "data_size": 0 00:15:19.776 } 00:15:19.776 ] 00:15:19.776 }' 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.776 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.346 [2024-11-17 01:35:28.662900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.346 BaseBdev2 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.346 01:35:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.346 [ 00:15:20.346 { 00:15:20.346 "name": "BaseBdev2", 00:15:20.346 "aliases": [ 00:15:20.346 "ec891505-7eb1-4a8c-bdcd-8d483e5b9825" 00:15:20.346 ], 00:15:20.346 "product_name": "Malloc disk", 00:15:20.346 "block_size": 512, 00:15:20.346 "num_blocks": 65536, 00:15:20.346 "uuid": "ec891505-7eb1-4a8c-bdcd-8d483e5b9825", 00:15:20.346 "assigned_rate_limits": { 00:15:20.346 "rw_ios_per_sec": 0, 00:15:20.346 "rw_mbytes_per_sec": 0, 00:15:20.346 "r_mbytes_per_sec": 0, 00:15:20.346 "w_mbytes_per_sec": 0 00:15:20.346 }, 00:15:20.346 "claimed": true, 00:15:20.346 "claim_type": "exclusive_write", 00:15:20.346 "zoned": false, 00:15:20.347 "supported_io_types": { 00:15:20.347 "read": true, 00:15:20.347 "write": true, 00:15:20.347 "unmap": true, 00:15:20.347 "flush": true, 00:15:20.347 "reset": true, 00:15:20.347 "nvme_admin": false, 00:15:20.347 "nvme_io": false, 00:15:20.347 "nvme_io_md": false, 00:15:20.347 "write_zeroes": true, 00:15:20.347 "zcopy": true, 00:15:20.347 "get_zone_info": false, 00:15:20.347 "zone_management": false, 00:15:20.347 "zone_append": false, 00:15:20.347 "compare": false, 00:15:20.347 "compare_and_write": false, 00:15:20.347 "abort": true, 00:15:20.347 "seek_hole": false, 00:15:20.347 "seek_data": false, 00:15:20.347 "copy": true, 00:15:20.347 "nvme_iov_md": false 00:15:20.347 }, 00:15:20.347 "memory_domains": [ 00:15:20.347 { 00:15:20.347 "dma_device_id": "system", 00:15:20.347 "dma_device_type": 1 00:15:20.347 }, 00:15:20.347 { 00:15:20.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.347 "dma_device_type": 2 00:15:20.347 } 00:15:20.347 ], 00:15:20.347 "driver_specific": {} 00:15:20.347 } 00:15:20.347 ] 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:20.347 "name": "Existed_Raid", 00:15:20.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.347 "strip_size_kb": 64, 00:15:20.347 "state": "configuring", 00:15:20.347 "raid_level": "raid5f", 00:15:20.347 "superblock": false, 00:15:20.347 "num_base_bdevs": 3, 00:15:20.347 "num_base_bdevs_discovered": 2, 00:15:20.347 "num_base_bdevs_operational": 3, 00:15:20.347 "base_bdevs_list": [ 00:15:20.347 { 00:15:20.347 "name": "BaseBdev1", 00:15:20.347 "uuid": "5ce7948f-20c7-477f-8feb-d8e668c0497b", 00:15:20.347 "is_configured": true, 00:15:20.347 "data_offset": 0, 00:15:20.347 "data_size": 65536 00:15:20.347 }, 00:15:20.347 { 00:15:20.347 "name": "BaseBdev2", 00:15:20.347 "uuid": "ec891505-7eb1-4a8c-bdcd-8d483e5b9825", 00:15:20.347 "is_configured": true, 00:15:20.347 "data_offset": 0, 00:15:20.347 "data_size": 65536 00:15:20.347 }, 00:15:20.347 { 00:15:20.347 "name": "BaseBdev3", 00:15:20.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.347 "is_configured": false, 00:15:20.347 "data_offset": 0, 00:15:20.347 "data_size": 0 00:15:20.347 } 00:15:20.347 ] 00:15:20.347 }' 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.347 01:35:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.917 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:20.917 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.917 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.917 [2024-11-17 01:35:29.188329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.918 [2024-11-17 01:35:29.188425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:20.918 [2024-11-17 01:35:29.188440] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:20.918 [2024-11-17 01:35:29.188704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:20.918 [2024-11-17 01:35:29.194041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:20.918 [2024-11-17 01:35:29.194065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:20.918 [2024-11-17 01:35:29.194387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.918 BaseBdev3 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.918 [ 00:15:20.918 { 00:15:20.918 "name": "BaseBdev3", 00:15:20.918 "aliases": [ 00:15:20.918 "c742b3de-2936-432d-8d7f-265040cf7fd2" 00:15:20.918 ], 00:15:20.918 "product_name": "Malloc disk", 00:15:20.918 "block_size": 512, 00:15:20.918 "num_blocks": 65536, 00:15:20.918 "uuid": "c742b3de-2936-432d-8d7f-265040cf7fd2", 00:15:20.918 "assigned_rate_limits": { 00:15:20.918 "rw_ios_per_sec": 0, 00:15:20.918 "rw_mbytes_per_sec": 0, 00:15:20.918 "r_mbytes_per_sec": 0, 00:15:20.918 "w_mbytes_per_sec": 0 00:15:20.918 }, 00:15:20.918 "claimed": true, 00:15:20.918 "claim_type": "exclusive_write", 00:15:20.918 "zoned": false, 00:15:20.918 "supported_io_types": { 00:15:20.918 "read": true, 00:15:20.918 "write": true, 00:15:20.918 "unmap": true, 00:15:20.918 "flush": true, 00:15:20.918 "reset": true, 00:15:20.918 "nvme_admin": false, 00:15:20.918 "nvme_io": false, 00:15:20.918 "nvme_io_md": false, 00:15:20.918 "write_zeroes": true, 00:15:20.918 "zcopy": true, 00:15:20.918 "get_zone_info": false, 00:15:20.918 "zone_management": false, 00:15:20.918 "zone_append": false, 00:15:20.918 "compare": false, 00:15:20.918 "compare_and_write": false, 00:15:20.918 "abort": true, 00:15:20.918 "seek_hole": false, 00:15:20.918 "seek_data": false, 00:15:20.918 "copy": true, 00:15:20.918 "nvme_iov_md": false 00:15:20.918 }, 00:15:20.918 "memory_domains": [ 00:15:20.918 { 00:15:20.918 "dma_device_id": "system", 00:15:20.918 "dma_device_type": 1 00:15:20.918 }, 00:15:20.918 { 00:15:20.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.918 "dma_device_type": 2 00:15:20.918 } 00:15:20.918 ], 00:15:20.918 "driver_specific": {} 00:15:20.918 } 00:15:20.918 ] 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.918 01:35:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.918 "name": "Existed_Raid", 00:15:20.918 "uuid": "e5eeabd1-ab1b-4c47-a323-4404459014ef", 00:15:20.918 "strip_size_kb": 64, 00:15:20.918 "state": "online", 00:15:20.918 "raid_level": "raid5f", 00:15:20.918 "superblock": false, 00:15:20.918 "num_base_bdevs": 3, 00:15:20.918 "num_base_bdevs_discovered": 3, 00:15:20.918 "num_base_bdevs_operational": 3, 00:15:20.918 "base_bdevs_list": [ 00:15:20.918 { 00:15:20.918 "name": "BaseBdev1", 00:15:20.918 "uuid": "5ce7948f-20c7-477f-8feb-d8e668c0497b", 00:15:20.918 "is_configured": true, 00:15:20.918 "data_offset": 0, 00:15:20.918 "data_size": 65536 00:15:20.918 }, 00:15:20.918 { 00:15:20.918 "name": "BaseBdev2", 00:15:20.918 "uuid": "ec891505-7eb1-4a8c-bdcd-8d483e5b9825", 00:15:20.918 "is_configured": true, 00:15:20.918 "data_offset": 0, 00:15:20.918 "data_size": 65536 00:15:20.918 }, 00:15:20.918 { 00:15:20.918 "name": "BaseBdev3", 00:15:20.918 "uuid": "c742b3de-2936-432d-8d7f-265040cf7fd2", 00:15:20.918 "is_configured": true, 00:15:20.918 "data_offset": 0, 00:15:20.918 "data_size": 65536 00:15:20.918 } 00:15:20.918 ] 00:15:20.918 }' 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.918 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:21.489 01:35:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:21.489 [2024-11-17 01:35:29.687862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.489 "name": "Existed_Raid", 00:15:21.489 "aliases": [ 00:15:21.489 "e5eeabd1-ab1b-4c47-a323-4404459014ef" 00:15:21.489 ], 00:15:21.489 "product_name": "Raid Volume", 00:15:21.489 "block_size": 512, 00:15:21.489 "num_blocks": 131072, 00:15:21.489 "uuid": "e5eeabd1-ab1b-4c47-a323-4404459014ef", 00:15:21.489 "assigned_rate_limits": { 00:15:21.489 "rw_ios_per_sec": 0, 00:15:21.489 "rw_mbytes_per_sec": 0, 00:15:21.489 "r_mbytes_per_sec": 0, 00:15:21.489 "w_mbytes_per_sec": 0 00:15:21.489 }, 00:15:21.489 "claimed": false, 00:15:21.489 "zoned": false, 00:15:21.489 "supported_io_types": { 00:15:21.489 "read": true, 00:15:21.489 "write": true, 00:15:21.489 "unmap": false, 00:15:21.489 "flush": false, 00:15:21.489 "reset": true, 00:15:21.489 "nvme_admin": false, 00:15:21.489 "nvme_io": false, 00:15:21.489 "nvme_io_md": false, 00:15:21.489 "write_zeroes": true, 00:15:21.489 "zcopy": false, 00:15:21.489 "get_zone_info": false, 00:15:21.489 "zone_management": false, 00:15:21.489 "zone_append": false, 
00:15:21.489 "compare": false, 00:15:21.489 "compare_and_write": false, 00:15:21.489 "abort": false, 00:15:21.489 "seek_hole": false, 00:15:21.489 "seek_data": false, 00:15:21.489 "copy": false, 00:15:21.489 "nvme_iov_md": false 00:15:21.489 }, 00:15:21.489 "driver_specific": { 00:15:21.489 "raid": { 00:15:21.489 "uuid": "e5eeabd1-ab1b-4c47-a323-4404459014ef", 00:15:21.489 "strip_size_kb": 64, 00:15:21.489 "state": "online", 00:15:21.489 "raid_level": "raid5f", 00:15:21.489 "superblock": false, 00:15:21.489 "num_base_bdevs": 3, 00:15:21.489 "num_base_bdevs_discovered": 3, 00:15:21.489 "num_base_bdevs_operational": 3, 00:15:21.489 "base_bdevs_list": [ 00:15:21.489 { 00:15:21.489 "name": "BaseBdev1", 00:15:21.489 "uuid": "5ce7948f-20c7-477f-8feb-d8e668c0497b", 00:15:21.489 "is_configured": true, 00:15:21.489 "data_offset": 0, 00:15:21.489 "data_size": 65536 00:15:21.489 }, 00:15:21.489 { 00:15:21.489 "name": "BaseBdev2", 00:15:21.489 "uuid": "ec891505-7eb1-4a8c-bdcd-8d483e5b9825", 00:15:21.489 "is_configured": true, 00:15:21.489 "data_offset": 0, 00:15:21.489 "data_size": 65536 00:15:21.489 }, 00:15:21.489 { 00:15:21.489 "name": "BaseBdev3", 00:15:21.489 "uuid": "c742b3de-2936-432d-8d7f-265040cf7fd2", 00:15:21.489 "is_configured": true, 00:15:21.489 "data_offset": 0, 00:15:21.489 "data_size": 65536 00:15:21.489 } 00:15:21.489 ] 00:15:21.489 } 00:15:21.489 } 00:15:21.489 }' 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:21.489 BaseBdev2 00:15:21.489 BaseBdev3' 00:15:21.489 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.490 01:35:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.490 [2024-11-17 01:35:29.927303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:21.750 
01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.750 "name": "Existed_Raid", 00:15:21.750 "uuid": "e5eeabd1-ab1b-4c47-a323-4404459014ef", 00:15:21.750 "strip_size_kb": 64, 00:15:21.750 "state": 
"online", 00:15:21.750 "raid_level": "raid5f", 00:15:21.750 "superblock": false, 00:15:21.750 "num_base_bdevs": 3, 00:15:21.750 "num_base_bdevs_discovered": 2, 00:15:21.750 "num_base_bdevs_operational": 2, 00:15:21.750 "base_bdevs_list": [ 00:15:21.750 { 00:15:21.750 "name": null, 00:15:21.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.750 "is_configured": false, 00:15:21.750 "data_offset": 0, 00:15:21.750 "data_size": 65536 00:15:21.750 }, 00:15:21.750 { 00:15:21.750 "name": "BaseBdev2", 00:15:21.750 "uuid": "ec891505-7eb1-4a8c-bdcd-8d483e5b9825", 00:15:21.750 "is_configured": true, 00:15:21.750 "data_offset": 0, 00:15:21.750 "data_size": 65536 00:15:21.750 }, 00:15:21.750 { 00:15:21.750 "name": "BaseBdev3", 00:15:21.750 "uuid": "c742b3de-2936-432d-8d7f-265040cf7fd2", 00:15:21.750 "is_configured": true, 00:15:21.750 "data_offset": 0, 00:15:21.750 "data_size": 65536 00:15:21.750 } 00:15:21.750 ] 00:15:21.750 }' 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.750 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.320 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:22.320 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.320 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:22.320 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.320 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.320 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.320 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.320 01:35:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.321 [2024-11-17 01:35:30.547273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:22.321 [2024-11-17 01:35:30.547374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.321 [2024-11-17 01:35:30.634889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.321 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.321 [2024-11-17 01:35:30.694843] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:22.321 [2024-11-17 01:35:30.694905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 BaseBdev2 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:22.582 [ 00:15:22.582 { 00:15:22.582 "name": "BaseBdev2", 00:15:22.582 "aliases": [ 00:15:22.582 "91422258-6693-4d3b-b678-ea79f3896a63" 00:15:22.582 ], 00:15:22.582 "product_name": "Malloc disk", 00:15:22.582 "block_size": 512, 00:15:22.582 "num_blocks": 65536, 00:15:22.582 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:22.582 "assigned_rate_limits": { 00:15:22.582 "rw_ios_per_sec": 0, 00:15:22.582 "rw_mbytes_per_sec": 0, 00:15:22.582 "r_mbytes_per_sec": 0, 00:15:22.582 "w_mbytes_per_sec": 0 00:15:22.582 }, 00:15:22.582 "claimed": false, 00:15:22.582 "zoned": false, 00:15:22.582 "supported_io_types": { 00:15:22.582 "read": true, 00:15:22.582 "write": true, 00:15:22.582 "unmap": true, 00:15:22.582 "flush": true, 00:15:22.582 "reset": true, 00:15:22.582 "nvme_admin": false, 00:15:22.582 "nvme_io": false, 00:15:22.582 "nvme_io_md": false, 00:15:22.582 "write_zeroes": true, 00:15:22.582 "zcopy": true, 00:15:22.582 "get_zone_info": false, 00:15:22.582 "zone_management": false, 00:15:22.582 "zone_append": false, 00:15:22.582 "compare": false, 00:15:22.582 "compare_and_write": false, 00:15:22.582 "abort": true, 00:15:22.582 "seek_hole": false, 00:15:22.582 "seek_data": false, 00:15:22.582 "copy": true, 00:15:22.582 "nvme_iov_md": false 00:15:22.582 }, 00:15:22.582 "memory_domains": [ 00:15:22.582 { 00:15:22.582 "dma_device_id": "system", 00:15:22.582 "dma_device_type": 1 00:15:22.582 }, 00:15:22.582 { 00:15:22.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.582 "dma_device_type": 2 00:15:22.582 } 00:15:22.582 ], 00:15:22.582 "driver_specific": {} 00:15:22.582 } 00:15:22.582 ] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 BaseBdev3 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.582 01:35:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.582 [ 00:15:22.582 { 00:15:22.582 "name": "BaseBdev3", 00:15:22.582 "aliases": [ 00:15:22.582 "dbe842f9-9c49-435e-adc4-30bb8a9db94f" 00:15:22.582 ], 00:15:22.582 "product_name": "Malloc disk", 00:15:22.582 "block_size": 512, 00:15:22.582 "num_blocks": 65536, 00:15:22.582 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:22.582 "assigned_rate_limits": { 00:15:22.582 "rw_ios_per_sec": 0, 00:15:22.582 "rw_mbytes_per_sec": 0, 00:15:22.582 "r_mbytes_per_sec": 0, 00:15:22.582 "w_mbytes_per_sec": 0 00:15:22.582 }, 00:15:22.582 "claimed": false, 00:15:22.582 "zoned": false, 00:15:22.582 "supported_io_types": { 00:15:22.582 "read": true, 00:15:22.582 "write": true, 00:15:22.582 "unmap": true, 00:15:22.582 "flush": true, 00:15:22.582 "reset": true, 00:15:22.582 "nvme_admin": false, 00:15:22.582 "nvme_io": false, 00:15:22.582 "nvme_io_md": false, 00:15:22.582 "write_zeroes": true, 00:15:22.582 "zcopy": true, 00:15:22.582 "get_zone_info": false, 00:15:22.582 "zone_management": false, 00:15:22.582 "zone_append": false, 00:15:22.582 "compare": false, 00:15:22.582 "compare_and_write": false, 00:15:22.582 "abort": true, 00:15:22.582 "seek_hole": false, 00:15:22.582 "seek_data": false, 00:15:22.582 "copy": true, 00:15:22.582 "nvme_iov_md": false 00:15:22.582 }, 00:15:22.582 "memory_domains": [ 00:15:22.582 { 00:15:22.582 "dma_device_id": "system", 00:15:22.582 "dma_device_type": 1 00:15:22.582 }, 00:15:22.582 { 00:15:22.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.582 "dma_device_type": 2 00:15:22.582 } 00:15:22.583 ], 00:15:22.583 "driver_specific": {} 00:15:22.583 } 00:15:22.583 ] 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:22.583 01:35:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.583 [2024-11-17 01:35:30.993741] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.583 [2024-11-17 01:35:30.993850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.583 [2024-11-17 01:35:30.993921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.583 [2024-11-17 01:35:30.995738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.583 01:35:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.583 01:35:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.583 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.583 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.583 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.583 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.583 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.843 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.843 "name": "Existed_Raid", 00:15:22.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.843 "strip_size_kb": 64, 00:15:22.843 "state": "configuring", 00:15:22.843 "raid_level": "raid5f", 00:15:22.843 "superblock": false, 00:15:22.843 "num_base_bdevs": 3, 00:15:22.843 "num_base_bdevs_discovered": 2, 00:15:22.843 "num_base_bdevs_operational": 3, 00:15:22.843 "base_bdevs_list": [ 00:15:22.843 { 00:15:22.843 "name": "BaseBdev1", 00:15:22.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.843 "is_configured": false, 00:15:22.843 "data_offset": 0, 00:15:22.843 "data_size": 0 00:15:22.843 }, 00:15:22.843 { 00:15:22.843 "name": "BaseBdev2", 00:15:22.843 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:22.843 "is_configured": true, 00:15:22.843 "data_offset": 0, 00:15:22.843 "data_size": 65536 00:15:22.843 }, 00:15:22.843 { 00:15:22.843 "name": "BaseBdev3", 00:15:22.843 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:22.843 "is_configured": true, 
00:15:22.843 "data_offset": 0, 00:15:22.843 "data_size": 65536 00:15:22.843 } 00:15:22.843 ] 00:15:22.843 }' 00:15:22.843 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.843 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.103 [2024-11-17 01:35:31.464910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.103 01:35:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.103 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.103 "name": "Existed_Raid", 00:15:23.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.103 "strip_size_kb": 64, 00:15:23.103 "state": "configuring", 00:15:23.103 "raid_level": "raid5f", 00:15:23.103 "superblock": false, 00:15:23.103 "num_base_bdevs": 3, 00:15:23.103 "num_base_bdevs_discovered": 1, 00:15:23.103 "num_base_bdevs_operational": 3, 00:15:23.103 "base_bdevs_list": [ 00:15:23.103 { 00:15:23.103 "name": "BaseBdev1", 00:15:23.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.103 "is_configured": false, 00:15:23.103 "data_offset": 0, 00:15:23.103 "data_size": 0 00:15:23.103 }, 00:15:23.103 { 00:15:23.104 "name": null, 00:15:23.104 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:23.104 "is_configured": false, 00:15:23.104 "data_offset": 0, 00:15:23.104 "data_size": 65536 00:15:23.104 }, 00:15:23.104 { 00:15:23.104 "name": "BaseBdev3", 00:15:23.104 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:23.104 "is_configured": true, 00:15:23.104 "data_offset": 0, 00:15:23.104 "data_size": 65536 00:15:23.104 } 00:15:23.104 ] 00:15:23.104 }' 00:15:23.104 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.104 01:35:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.674 [2024-11-17 01:35:31.980633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.674 BaseBdev1 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.674 01:35:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.674 01:35:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.674 [ 00:15:23.674 { 00:15:23.674 "name": "BaseBdev1", 00:15:23.674 "aliases": [ 00:15:23.674 "be63cc64-72bb-45e2-a444-68db056844a5" 00:15:23.674 ], 00:15:23.674 "product_name": "Malloc disk", 00:15:23.674 "block_size": 512, 00:15:23.674 "num_blocks": 65536, 00:15:23.674 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:23.674 "assigned_rate_limits": { 00:15:23.674 "rw_ios_per_sec": 0, 00:15:23.674 "rw_mbytes_per_sec": 0, 00:15:23.674 "r_mbytes_per_sec": 0, 00:15:23.674 "w_mbytes_per_sec": 0 00:15:23.674 }, 00:15:23.674 "claimed": true, 00:15:23.674 "claim_type": "exclusive_write", 00:15:23.674 "zoned": false, 00:15:23.674 "supported_io_types": { 00:15:23.674 "read": true, 00:15:23.674 "write": true, 00:15:23.674 "unmap": true, 00:15:23.674 "flush": true, 00:15:23.674 "reset": true, 00:15:23.674 "nvme_admin": false, 00:15:23.674 "nvme_io": false, 00:15:23.674 "nvme_io_md": false, 00:15:23.674 "write_zeroes": true, 00:15:23.674 "zcopy": true, 00:15:23.674 "get_zone_info": false, 00:15:23.674 "zone_management": false, 00:15:23.674 "zone_append": false, 00:15:23.674 
"compare": false, 00:15:23.674 "compare_and_write": false, 00:15:23.674 "abort": true, 00:15:23.674 "seek_hole": false, 00:15:23.674 "seek_data": false, 00:15:23.674 "copy": true, 00:15:23.674 "nvme_iov_md": false 00:15:23.674 }, 00:15:23.674 "memory_domains": [ 00:15:23.674 { 00:15:23.674 "dma_device_id": "system", 00:15:23.674 "dma_device_type": 1 00:15:23.674 }, 00:15:23.674 { 00:15:23.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.674 "dma_device_type": 2 00:15:23.674 } 00:15:23.674 ], 00:15:23.674 "driver_specific": {} 00:15:23.674 } 00:15:23.674 ] 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.674 01:35:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.674 "name": "Existed_Raid", 00:15:23.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.674 "strip_size_kb": 64, 00:15:23.674 "state": "configuring", 00:15:23.674 "raid_level": "raid5f", 00:15:23.674 "superblock": false, 00:15:23.674 "num_base_bdevs": 3, 00:15:23.674 "num_base_bdevs_discovered": 2, 00:15:23.674 "num_base_bdevs_operational": 3, 00:15:23.674 "base_bdevs_list": [ 00:15:23.674 { 00:15:23.674 "name": "BaseBdev1", 00:15:23.674 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:23.674 "is_configured": true, 00:15:23.674 "data_offset": 0, 00:15:23.674 "data_size": 65536 00:15:23.674 }, 00:15:23.674 { 00:15:23.674 "name": null, 00:15:23.674 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:23.674 "is_configured": false, 00:15:23.674 "data_offset": 0, 00:15:23.674 "data_size": 65536 00:15:23.674 }, 00:15:23.674 { 00:15:23.674 "name": "BaseBdev3", 00:15:23.674 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:23.674 "is_configured": true, 00:15:23.674 "data_offset": 0, 00:15:23.674 "data_size": 65536 00:15:23.674 } 00:15:23.674 ] 00:15:23.674 }' 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.674 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.253 01:35:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.253 [2024-11-17 01:35:32.527748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.253 01:35:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.253 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.253 "name": "Existed_Raid", 00:15:24.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.253 "strip_size_kb": 64, 00:15:24.253 "state": "configuring", 00:15:24.253 "raid_level": "raid5f", 00:15:24.253 "superblock": false, 00:15:24.253 "num_base_bdevs": 3, 00:15:24.253 "num_base_bdevs_discovered": 1, 00:15:24.253 "num_base_bdevs_operational": 3, 00:15:24.253 "base_bdevs_list": [ 00:15:24.253 { 00:15:24.253 "name": "BaseBdev1", 00:15:24.253 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:24.253 "is_configured": true, 00:15:24.253 "data_offset": 0, 00:15:24.253 "data_size": 65536 00:15:24.253 }, 00:15:24.253 { 00:15:24.253 "name": null, 00:15:24.253 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:24.253 "is_configured": false, 00:15:24.253 "data_offset": 0, 00:15:24.253 "data_size": 65536 00:15:24.254 }, 00:15:24.254 { 00:15:24.254 "name": null, 
00:15:24.254 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:24.254 "is_configured": false, 00:15:24.254 "data_offset": 0, 00:15:24.254 "data_size": 65536 00:15:24.254 } 00:15:24.254 ] 00:15:24.254 }' 00:15:24.254 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.254 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.527 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.527 01:35:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:24.527 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.527 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.787 01:35:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.787 [2024-11-17 01:35:33.018986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.787 01:35:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.787 "name": "Existed_Raid", 00:15:24.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.787 "strip_size_kb": 64, 00:15:24.787 "state": "configuring", 00:15:24.787 "raid_level": "raid5f", 00:15:24.787 "superblock": false, 00:15:24.787 "num_base_bdevs": 3, 00:15:24.787 "num_base_bdevs_discovered": 2, 00:15:24.787 "num_base_bdevs_operational": 3, 00:15:24.787 "base_bdevs_list": [ 00:15:24.787 { 
00:15:24.787 "name": "BaseBdev1", 00:15:24.787 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:24.787 "is_configured": true, 00:15:24.787 "data_offset": 0, 00:15:24.787 "data_size": 65536 00:15:24.787 }, 00:15:24.787 { 00:15:24.787 "name": null, 00:15:24.787 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:24.787 "is_configured": false, 00:15:24.787 "data_offset": 0, 00:15:24.787 "data_size": 65536 00:15:24.787 }, 00:15:24.787 { 00:15:24.787 "name": "BaseBdev3", 00:15:24.787 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:24.787 "is_configured": true, 00:15:24.787 "data_offset": 0, 00:15:24.787 "data_size": 65536 00:15:24.787 } 00:15:24.787 ] 00:15:24.787 }' 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.787 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.046 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.046 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.046 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.046 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:25.046 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.306 [2024-11-17 01:35:33.538160] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.306 "name": "Existed_Raid", 00:15:25.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.306 "strip_size_kb": 64, 00:15:25.306 "state": "configuring", 00:15:25.306 "raid_level": "raid5f", 00:15:25.306 "superblock": false, 00:15:25.306 "num_base_bdevs": 3, 00:15:25.306 "num_base_bdevs_discovered": 1, 00:15:25.306 "num_base_bdevs_operational": 3, 00:15:25.306 "base_bdevs_list": [ 00:15:25.306 { 00:15:25.306 "name": null, 00:15:25.306 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:25.306 "is_configured": false, 00:15:25.306 "data_offset": 0, 00:15:25.306 "data_size": 65536 00:15:25.306 }, 00:15:25.306 { 00:15:25.306 "name": null, 00:15:25.306 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:25.306 "is_configured": false, 00:15:25.306 "data_offset": 0, 00:15:25.306 "data_size": 65536 00:15:25.306 }, 00:15:25.306 { 00:15:25.306 "name": "BaseBdev3", 00:15:25.306 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:25.306 "is_configured": true, 00:15:25.306 "data_offset": 0, 00:15:25.306 "data_size": 65536 00:15:25.306 } 00:15:25.306 ] 00:15:25.306 }' 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.306 01:35:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.876 [2024-11-17 01:35:34.113862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.876 01:35:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.876 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.876 "name": "Existed_Raid", 00:15:25.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.876 "strip_size_kb": 64, 00:15:25.876 "state": "configuring", 00:15:25.876 "raid_level": "raid5f", 00:15:25.876 "superblock": false, 00:15:25.876 "num_base_bdevs": 3, 00:15:25.876 "num_base_bdevs_discovered": 2, 00:15:25.876 "num_base_bdevs_operational": 3, 00:15:25.876 "base_bdevs_list": [ 00:15:25.876 { 00:15:25.876 "name": null, 00:15:25.877 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:25.877 "is_configured": false, 00:15:25.877 "data_offset": 0, 00:15:25.877 "data_size": 65536 00:15:25.877 }, 00:15:25.877 { 00:15:25.877 "name": "BaseBdev2", 00:15:25.877 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:25.877 "is_configured": true, 00:15:25.877 "data_offset": 0, 00:15:25.877 "data_size": 65536 00:15:25.877 }, 00:15:25.877 { 00:15:25.877 "name": "BaseBdev3", 00:15:25.877 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:25.877 "is_configured": true, 00:15:25.877 "data_offset": 0, 00:15:25.877 "data_size": 65536 00:15:25.877 } 00:15:25.877 ] 00:15:25.877 }' 00:15:25.877 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.877 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.136 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.136 01:35:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.136 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.136 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:26.136 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be63cc64-72bb-45e2-a444-68db056844a5 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.397 [2024-11-17 01:35:34.709898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:26.397 [2024-11-17 01:35:34.710010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:26.397 [2024-11-17 01:35:34.710046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:26.397 [2024-11-17 01:35:34.710328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:26.397 [2024-11-17 01:35:34.715748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:26.397 [2024-11-17 01:35:34.715817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:26.397 [2024-11-17 01:35:34.716103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.397 NewBaseBdev 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.397 01:35:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.397 [ 00:15:26.397 { 00:15:26.397 "name": "NewBaseBdev", 00:15:26.397 "aliases": [ 00:15:26.397 "be63cc64-72bb-45e2-a444-68db056844a5" 00:15:26.397 ], 00:15:26.397 "product_name": "Malloc disk", 00:15:26.397 "block_size": 512, 00:15:26.397 "num_blocks": 65536, 00:15:26.397 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:26.397 "assigned_rate_limits": { 00:15:26.397 "rw_ios_per_sec": 0, 00:15:26.397 "rw_mbytes_per_sec": 0, 00:15:26.397 "r_mbytes_per_sec": 0, 00:15:26.397 "w_mbytes_per_sec": 0 00:15:26.397 }, 00:15:26.397 "claimed": true, 00:15:26.397 "claim_type": "exclusive_write", 00:15:26.397 "zoned": false, 00:15:26.397 "supported_io_types": { 00:15:26.397 "read": true, 00:15:26.397 "write": true, 00:15:26.397 "unmap": true, 00:15:26.397 "flush": true, 00:15:26.397 "reset": true, 00:15:26.397 "nvme_admin": false, 00:15:26.397 "nvme_io": false, 00:15:26.397 "nvme_io_md": false, 00:15:26.397 "write_zeroes": true, 00:15:26.397 "zcopy": true, 00:15:26.397 "get_zone_info": false, 00:15:26.397 "zone_management": false, 00:15:26.397 "zone_append": false, 00:15:26.397 "compare": false, 00:15:26.397 "compare_and_write": false, 00:15:26.397 "abort": true, 00:15:26.397 "seek_hole": false, 00:15:26.397 "seek_data": false, 00:15:26.397 "copy": true, 00:15:26.397 "nvme_iov_md": false 00:15:26.397 }, 00:15:26.397 "memory_domains": [ 00:15:26.397 { 00:15:26.397 "dma_device_id": "system", 00:15:26.397 "dma_device_type": 1 00:15:26.397 }, 00:15:26.397 { 00:15:26.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.397 "dma_device_type": 2 00:15:26.397 } 00:15:26.397 ], 00:15:26.397 "driver_specific": {} 00:15:26.397 } 00:15:26.397 ] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.397 01:35:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.397 "name": "Existed_Raid", 00:15:26.397 "uuid": "230bae5a-6759-4a02-922e-d9fcfc4cbfef", 00:15:26.397 "strip_size_kb": 64, 00:15:26.397 "state": "online", 
00:15:26.397 "raid_level": "raid5f", 00:15:26.397 "superblock": false, 00:15:26.397 "num_base_bdevs": 3, 00:15:26.397 "num_base_bdevs_discovered": 3, 00:15:26.397 "num_base_bdevs_operational": 3, 00:15:26.397 "base_bdevs_list": [ 00:15:26.397 { 00:15:26.397 "name": "NewBaseBdev", 00:15:26.397 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:26.397 "is_configured": true, 00:15:26.397 "data_offset": 0, 00:15:26.397 "data_size": 65536 00:15:26.397 }, 00:15:26.397 { 00:15:26.397 "name": "BaseBdev2", 00:15:26.397 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:26.397 "is_configured": true, 00:15:26.397 "data_offset": 0, 00:15:26.397 "data_size": 65536 00:15:26.397 }, 00:15:26.397 { 00:15:26.397 "name": "BaseBdev3", 00:15:26.397 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:26.397 "is_configured": true, 00:15:26.397 "data_offset": 0, 00:15:26.397 "data_size": 65536 00:15:26.397 } 00:15:26.397 ] 00:15:26.397 }' 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.397 01:35:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:26.967 01:35:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.967 [2024-11-17 01:35:35.217852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.967 "name": "Existed_Raid", 00:15:26.967 "aliases": [ 00:15:26.967 "230bae5a-6759-4a02-922e-d9fcfc4cbfef" 00:15:26.967 ], 00:15:26.967 "product_name": "Raid Volume", 00:15:26.967 "block_size": 512, 00:15:26.967 "num_blocks": 131072, 00:15:26.967 "uuid": "230bae5a-6759-4a02-922e-d9fcfc4cbfef", 00:15:26.967 "assigned_rate_limits": { 00:15:26.967 "rw_ios_per_sec": 0, 00:15:26.967 "rw_mbytes_per_sec": 0, 00:15:26.967 "r_mbytes_per_sec": 0, 00:15:26.967 "w_mbytes_per_sec": 0 00:15:26.967 }, 00:15:26.967 "claimed": false, 00:15:26.967 "zoned": false, 00:15:26.967 "supported_io_types": { 00:15:26.967 "read": true, 00:15:26.967 "write": true, 00:15:26.967 "unmap": false, 00:15:26.967 "flush": false, 00:15:26.967 "reset": true, 00:15:26.967 "nvme_admin": false, 00:15:26.967 "nvme_io": false, 00:15:26.967 "nvme_io_md": false, 00:15:26.967 "write_zeroes": true, 00:15:26.967 "zcopy": false, 00:15:26.967 "get_zone_info": false, 00:15:26.967 "zone_management": false, 00:15:26.967 "zone_append": false, 00:15:26.967 "compare": false, 00:15:26.967 "compare_and_write": false, 00:15:26.967 "abort": false, 00:15:26.967 "seek_hole": false, 00:15:26.967 "seek_data": false, 00:15:26.967 "copy": false, 00:15:26.967 "nvme_iov_md": false 00:15:26.967 }, 00:15:26.967 "driver_specific": { 00:15:26.967 "raid": { 00:15:26.967 "uuid": 
"230bae5a-6759-4a02-922e-d9fcfc4cbfef", 00:15:26.967 "strip_size_kb": 64, 00:15:26.967 "state": "online", 00:15:26.967 "raid_level": "raid5f", 00:15:26.967 "superblock": false, 00:15:26.967 "num_base_bdevs": 3, 00:15:26.967 "num_base_bdevs_discovered": 3, 00:15:26.967 "num_base_bdevs_operational": 3, 00:15:26.967 "base_bdevs_list": [ 00:15:26.967 { 00:15:26.967 "name": "NewBaseBdev", 00:15:26.967 "uuid": "be63cc64-72bb-45e2-a444-68db056844a5", 00:15:26.967 "is_configured": true, 00:15:26.967 "data_offset": 0, 00:15:26.967 "data_size": 65536 00:15:26.967 }, 00:15:26.967 { 00:15:26.967 "name": "BaseBdev2", 00:15:26.967 "uuid": "91422258-6693-4d3b-b678-ea79f3896a63", 00:15:26.967 "is_configured": true, 00:15:26.967 "data_offset": 0, 00:15:26.967 "data_size": 65536 00:15:26.967 }, 00:15:26.967 { 00:15:26.967 "name": "BaseBdev3", 00:15:26.967 "uuid": "dbe842f9-9c49-435e-adc4-30bb8a9db94f", 00:15:26.967 "is_configured": true, 00:15:26.967 "data_offset": 0, 00:15:26.967 "data_size": 65536 00:15:26.967 } 00:15:26.967 ] 00:15:26.967 } 00:15:26.967 } 00:15:26.967 }' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:26.967 BaseBdev2 00:15:26.967 BaseBdev3' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.967 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.227 01:35:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.227 [2024-11-17 01:35:35.501147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.227 [2024-11-17 01:35:35.501220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.227 [2024-11-17 01:35:35.501326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.227 [2024-11-17 01:35:35.501636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.227 [2024-11-17 01:35:35.501705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79628 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79628 ']' 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79628 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79628 00:15:27.227 killing process with pid 79628 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79628' 00:15:27.227 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79628 00:15:27.227 [2024-11-17 01:35:35.548337] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.228 01:35:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79628 00:15:27.487 [2024-11-17 01:35:35.845162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.870 01:35:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:28.870 00:15:28.870 real 0m10.670s 00:15:28.870 user 0m16.971s 00:15:28.870 sys 0m1.983s 00:15:28.870 01:35:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.870 ************************************ 00:15:28.870 END TEST raid5f_state_function_test 00:15:28.870 ************************************ 00:15:28.870 01:35:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.870 01:35:37 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:28.870 01:35:37 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:28.870 01:35:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.870 01:35:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.870 ************************************ 00:15:28.870 START TEST raid5f_state_function_test_sb 00:15:28.870 ************************************ 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:28.870 01:35:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:28.870 Process raid pid: 80249 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80249 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80249' 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80249 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80249 ']' 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.870 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.870 [2024-11-17 01:35:37.129239] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:28.870 [2024-11-17 01:35:37.129445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.870 [2024-11-17 01:35:37.304428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.130 [2024-11-17 01:35:37.420629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.390 [2024-11-17 01:35:37.627645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.390 [2024-11-17 01:35:37.627732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.650 [2024-11-17 01:35:37.955265] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.650 [2024-11-17 01:35:37.955364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.650 [2024-11-17 01:35:37.955407] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.650 [2024-11-17 01:35:37.955437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.650 [2024-11-17 01:35:37.955499] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:29.650 [2024-11-17 01:35:37.955540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.650 01:35:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.650 01:35:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.650 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.650 "name": "Existed_Raid", 00:15:29.650 "uuid": "62610333-879e-4268-803b-210ca07b273e", 00:15:29.650 "strip_size_kb": 64, 00:15:29.650 "state": "configuring", 00:15:29.650 "raid_level": "raid5f", 00:15:29.650 "superblock": true, 00:15:29.650 "num_base_bdevs": 3, 00:15:29.650 "num_base_bdevs_discovered": 0, 00:15:29.650 "num_base_bdevs_operational": 3, 00:15:29.650 "base_bdevs_list": [ 00:15:29.650 { 00:15:29.650 "name": "BaseBdev1", 00:15:29.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.650 "is_configured": false, 00:15:29.650 "data_offset": 0, 00:15:29.650 "data_size": 0 00:15:29.650 }, 00:15:29.650 { 00:15:29.650 "name": "BaseBdev2", 00:15:29.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.650 "is_configured": false, 00:15:29.650 "data_offset": 0, 00:15:29.650 "data_size": 0 00:15:29.650 }, 00:15:29.650 { 00:15:29.650 "name": "BaseBdev3", 00:15:29.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.650 "is_configured": false, 00:15:29.650 "data_offset": 0, 00:15:29.650 "data_size": 0 00:15:29.650 } 00:15:29.650 ] 00:15:29.650 }' 00:15:29.650 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.650 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.220 [2024-11-17 01:35:38.390470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.220 
[2024-11-17 01:35:38.390550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.220 [2024-11-17 01:35:38.402464] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.220 [2024-11-17 01:35:38.402551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.220 [2024-11-17 01:35:38.402594] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.220 [2024-11-17 01:35:38.402635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.220 [2024-11-17 01:35:38.402667] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.220 [2024-11-17 01:35:38.402706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.220 [2024-11-17 01:35:38.448357] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.220 BaseBdev1 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.220 [ 00:15:30.220 { 00:15:30.220 "name": "BaseBdev1", 00:15:30.220 "aliases": [ 00:15:30.220 "689abdcd-428e-4b77-911e-2a89e70b6e7c" 00:15:30.220 ], 00:15:30.220 "product_name": "Malloc disk", 00:15:30.220 "block_size": 512, 00:15:30.220 
"num_blocks": 65536, 00:15:30.220 "uuid": "689abdcd-428e-4b77-911e-2a89e70b6e7c", 00:15:30.220 "assigned_rate_limits": { 00:15:30.220 "rw_ios_per_sec": 0, 00:15:30.220 "rw_mbytes_per_sec": 0, 00:15:30.220 "r_mbytes_per_sec": 0, 00:15:30.220 "w_mbytes_per_sec": 0 00:15:30.220 }, 00:15:30.220 "claimed": true, 00:15:30.220 "claim_type": "exclusive_write", 00:15:30.220 "zoned": false, 00:15:30.220 "supported_io_types": { 00:15:30.220 "read": true, 00:15:30.220 "write": true, 00:15:30.220 "unmap": true, 00:15:30.220 "flush": true, 00:15:30.220 "reset": true, 00:15:30.220 "nvme_admin": false, 00:15:30.220 "nvme_io": false, 00:15:30.220 "nvme_io_md": false, 00:15:30.220 "write_zeroes": true, 00:15:30.220 "zcopy": true, 00:15:30.220 "get_zone_info": false, 00:15:30.220 "zone_management": false, 00:15:30.220 "zone_append": false, 00:15:30.220 "compare": false, 00:15:30.220 "compare_and_write": false, 00:15:30.220 "abort": true, 00:15:30.220 "seek_hole": false, 00:15:30.220 "seek_data": false, 00:15:30.220 "copy": true, 00:15:30.220 "nvme_iov_md": false 00:15:30.220 }, 00:15:30.220 "memory_domains": [ 00:15:30.220 { 00:15:30.220 "dma_device_id": "system", 00:15:30.220 "dma_device_type": 1 00:15:30.220 }, 00:15:30.220 { 00:15:30.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.220 "dma_device_type": 2 00:15:30.220 } 00:15:30.220 ], 00:15:30.220 "driver_specific": {} 00:15:30.220 } 00:15:30.220 ] 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.220 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.220 "name": "Existed_Raid", 00:15:30.220 "uuid": "04fa5423-1dee-42c7-9bd6-79593cba6494", 00:15:30.220 "strip_size_kb": 64, 00:15:30.220 "state": "configuring", 00:15:30.220 "raid_level": "raid5f", 00:15:30.220 "superblock": true, 00:15:30.220 "num_base_bdevs": 3, 00:15:30.220 "num_base_bdevs_discovered": 1, 00:15:30.221 "num_base_bdevs_operational": 3, 00:15:30.221 "base_bdevs_list": [ 00:15:30.221 { 00:15:30.221 
"name": "BaseBdev1", 00:15:30.221 "uuid": "689abdcd-428e-4b77-911e-2a89e70b6e7c", 00:15:30.221 "is_configured": true, 00:15:30.221 "data_offset": 2048, 00:15:30.221 "data_size": 63488 00:15:30.221 }, 00:15:30.221 { 00:15:30.221 "name": "BaseBdev2", 00:15:30.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.221 "is_configured": false, 00:15:30.221 "data_offset": 0, 00:15:30.221 "data_size": 0 00:15:30.221 }, 00:15:30.221 { 00:15:30.221 "name": "BaseBdev3", 00:15:30.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.221 "is_configured": false, 00:15:30.221 "data_offset": 0, 00:15:30.221 "data_size": 0 00:15:30.221 } 00:15:30.221 ] 00:15:30.221 }' 00:15:30.221 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.221 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.790 [2024-11-17 01:35:38.951534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.790 [2024-11-17 01:35:38.951635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:30.790 [2024-11-17 01:35:38.963562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.790 [2024-11-17 01:35:38.965387] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.790 [2024-11-17 01:35:38.965476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.790 [2024-11-17 01:35:38.965536] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.790 [2024-11-17 01:35:38.965583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.790 01:35:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.790 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.790 "name": "Existed_Raid", 00:15:30.790 "uuid": "01cebd0f-c67e-4024-a62a-6c4fb649c608", 00:15:30.790 "strip_size_kb": 64, 00:15:30.790 "state": "configuring", 00:15:30.790 "raid_level": "raid5f", 00:15:30.790 "superblock": true, 00:15:30.790 "num_base_bdevs": 3, 00:15:30.790 "num_base_bdevs_discovered": 1, 00:15:30.790 "num_base_bdevs_operational": 3, 00:15:30.790 "base_bdevs_list": [ 00:15:30.790 { 00:15:30.790 "name": "BaseBdev1", 00:15:30.790 "uuid": "689abdcd-428e-4b77-911e-2a89e70b6e7c", 00:15:30.790 "is_configured": true, 00:15:30.790 "data_offset": 2048, 00:15:30.790 "data_size": 63488 00:15:30.790 }, 00:15:30.790 { 00:15:30.790 "name": "BaseBdev2", 00:15:30.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.790 "is_configured": false, 00:15:30.790 "data_offset": 0, 00:15:30.791 "data_size": 0 00:15:30.791 }, 00:15:30.791 { 00:15:30.791 "name": "BaseBdev3", 00:15:30.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.791 "is_configured": false, 00:15:30.791 "data_offset": 0, 00:15:30.791 "data_size": 
0 00:15:30.791 } 00:15:30.791 ] 00:15:30.791 }' 00:15:30.791 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.791 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.050 [2024-11-17 01:35:39.455170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.050 BaseBdev2 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.050 [ 00:15:31.050 { 00:15:31.050 "name": "BaseBdev2", 00:15:31.050 "aliases": [ 00:15:31.050 "0f837590-6fe6-4bed-b209-0b7c3b6a8a14" 00:15:31.050 ], 00:15:31.050 "product_name": "Malloc disk", 00:15:31.050 "block_size": 512, 00:15:31.050 "num_blocks": 65536, 00:15:31.050 "uuid": "0f837590-6fe6-4bed-b209-0b7c3b6a8a14", 00:15:31.050 "assigned_rate_limits": { 00:15:31.050 "rw_ios_per_sec": 0, 00:15:31.050 "rw_mbytes_per_sec": 0, 00:15:31.050 "r_mbytes_per_sec": 0, 00:15:31.050 "w_mbytes_per_sec": 0 00:15:31.050 }, 00:15:31.050 "claimed": true, 00:15:31.050 "claim_type": "exclusive_write", 00:15:31.050 "zoned": false, 00:15:31.050 "supported_io_types": { 00:15:31.050 "read": true, 00:15:31.050 "write": true, 00:15:31.050 "unmap": true, 00:15:31.050 "flush": true, 00:15:31.050 "reset": true, 00:15:31.050 "nvme_admin": false, 00:15:31.050 "nvme_io": false, 00:15:31.050 "nvme_io_md": false, 00:15:31.050 "write_zeroes": true, 00:15:31.050 "zcopy": true, 00:15:31.050 "get_zone_info": false, 00:15:31.050 "zone_management": false, 00:15:31.050 "zone_append": false, 00:15:31.050 "compare": false, 00:15:31.050 "compare_and_write": false, 00:15:31.050 "abort": true, 00:15:31.050 "seek_hole": false, 00:15:31.050 "seek_data": false, 00:15:31.050 "copy": true, 00:15:31.050 "nvme_iov_md": false 00:15:31.050 }, 00:15:31.050 "memory_domains": [ 00:15:31.050 { 00:15:31.050 "dma_device_id": "system", 00:15:31.050 "dma_device_type": 1 00:15:31.050 }, 00:15:31.050 { 00:15:31.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.050 "dma_device_type": 2 00:15:31.050 } 
00:15:31.050 ], 00:15:31.050 "driver_specific": {} 00:15:31.050 } 00:15:31.050 ] 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.050 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.051 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.051 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.051 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.051 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.051 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.051 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.051 01:35:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.051 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.310 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.310 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.310 "name": "Existed_Raid", 00:15:31.310 "uuid": "01cebd0f-c67e-4024-a62a-6c4fb649c608", 00:15:31.310 "strip_size_kb": 64, 00:15:31.310 "state": "configuring", 00:15:31.310 "raid_level": "raid5f", 00:15:31.310 "superblock": true, 00:15:31.310 "num_base_bdevs": 3, 00:15:31.310 "num_base_bdevs_discovered": 2, 00:15:31.310 "num_base_bdevs_operational": 3, 00:15:31.310 "base_bdevs_list": [ 00:15:31.310 { 00:15:31.310 "name": "BaseBdev1", 00:15:31.310 "uuid": "689abdcd-428e-4b77-911e-2a89e70b6e7c", 00:15:31.310 "is_configured": true, 00:15:31.310 "data_offset": 2048, 00:15:31.310 "data_size": 63488 00:15:31.310 }, 00:15:31.310 { 00:15:31.310 "name": "BaseBdev2", 00:15:31.310 "uuid": "0f837590-6fe6-4bed-b209-0b7c3b6a8a14", 00:15:31.310 "is_configured": true, 00:15:31.310 "data_offset": 2048, 00:15:31.310 "data_size": 63488 00:15:31.310 }, 00:15:31.310 { 00:15:31.310 "name": "BaseBdev3", 00:15:31.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.310 "is_configured": false, 00:15:31.310 "data_offset": 0, 00:15:31.310 "data_size": 0 00:15:31.310 } 00:15:31.310 ] 00:15:31.310 }' 00:15:31.310 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.310 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.570 01:35:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.570 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:31.570 01:35:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.570 [2024-11-17 01:35:40.013640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.570 [2024-11-17 01:35:40.014020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:31.570 [2024-11-17 01:35:40.014087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:31.570 [2024-11-17 01:35:40.014394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:31.570 BaseBdev3 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.570 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.570 [2024-11-17 01:35:40.019919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:31.570 [2024-11-17 01:35:40.019981] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:31.570 [2024-11-17 01:35:40.020210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.830 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.830 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:31.830 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.830 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.830 [ 00:15:31.830 { 00:15:31.830 "name": "BaseBdev3", 00:15:31.831 "aliases": [ 00:15:31.831 "a5b8dc2f-e145-49d0-b69e-419d3e8119e9" 00:15:31.831 ], 00:15:31.831 "product_name": "Malloc disk", 00:15:31.831 "block_size": 512, 00:15:31.831 "num_blocks": 65536, 00:15:31.831 "uuid": "a5b8dc2f-e145-49d0-b69e-419d3e8119e9", 00:15:31.831 "assigned_rate_limits": { 00:15:31.831 "rw_ios_per_sec": 0, 00:15:31.831 "rw_mbytes_per_sec": 0, 00:15:31.831 "r_mbytes_per_sec": 0, 00:15:31.831 "w_mbytes_per_sec": 0 00:15:31.831 }, 00:15:31.831 "claimed": true, 00:15:31.831 "claim_type": "exclusive_write", 00:15:31.831 "zoned": false, 00:15:31.831 "supported_io_types": { 00:15:31.831 "read": true, 00:15:31.831 "write": true, 00:15:31.831 "unmap": true, 00:15:31.831 "flush": true, 00:15:31.831 "reset": true, 00:15:31.831 "nvme_admin": false, 00:15:31.831 "nvme_io": false, 00:15:31.831 "nvme_io_md": false, 00:15:31.831 "write_zeroes": true, 00:15:31.831 "zcopy": true, 00:15:31.831 "get_zone_info": false, 00:15:31.831 "zone_management": false, 00:15:31.831 "zone_append": false, 00:15:31.831 "compare": false, 00:15:31.831 "compare_and_write": false, 00:15:31.831 "abort": true, 00:15:31.831 "seek_hole": false, 00:15:31.831 "seek_data": false, 00:15:31.831 "copy": true, 00:15:31.831 
"nvme_iov_md": false 00:15:31.831 }, 00:15:31.831 "memory_domains": [ 00:15:31.831 { 00:15:31.831 "dma_device_id": "system", 00:15:31.831 "dma_device_type": 1 00:15:31.831 }, 00:15:31.831 { 00:15:31.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.831 "dma_device_type": 2 00:15:31.831 } 00:15:31.831 ], 00:15:31.831 "driver_specific": {} 00:15:31.831 } 00:15:31.831 ] 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.831 "name": "Existed_Raid", 00:15:31.831 "uuid": "01cebd0f-c67e-4024-a62a-6c4fb649c608", 00:15:31.831 "strip_size_kb": 64, 00:15:31.831 "state": "online", 00:15:31.831 "raid_level": "raid5f", 00:15:31.831 "superblock": true, 00:15:31.831 "num_base_bdevs": 3, 00:15:31.831 "num_base_bdevs_discovered": 3, 00:15:31.831 "num_base_bdevs_operational": 3, 00:15:31.831 "base_bdevs_list": [ 00:15:31.831 { 00:15:31.831 "name": "BaseBdev1", 00:15:31.831 "uuid": "689abdcd-428e-4b77-911e-2a89e70b6e7c", 00:15:31.831 "is_configured": true, 00:15:31.831 "data_offset": 2048, 00:15:31.831 "data_size": 63488 00:15:31.831 }, 00:15:31.831 { 00:15:31.831 "name": "BaseBdev2", 00:15:31.831 "uuid": "0f837590-6fe6-4bed-b209-0b7c3b6a8a14", 00:15:31.831 "is_configured": true, 00:15:31.831 "data_offset": 2048, 00:15:31.831 "data_size": 63488 00:15:31.831 }, 00:15:31.831 { 00:15:31.831 "name": "BaseBdev3", 00:15:31.831 "uuid": "a5b8dc2f-e145-49d0-b69e-419d3e8119e9", 00:15:31.831 "is_configured": true, 00:15:31.831 "data_offset": 2048, 00:15:31.831 "data_size": 63488 00:15:31.831 } 00:15:31.831 ] 00:15:31.831 }' 00:15:31.831 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.831 01:35:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.091 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:32.091 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:32.091 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.091 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.091 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.091 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.091 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:32.091 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.092 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.092 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.092 [2024-11-17 01:35:40.517357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.092 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.352 "name": "Existed_Raid", 00:15:32.352 "aliases": [ 00:15:32.352 "01cebd0f-c67e-4024-a62a-6c4fb649c608" 00:15:32.352 ], 00:15:32.352 "product_name": "Raid Volume", 00:15:32.352 "block_size": 512, 00:15:32.352 "num_blocks": 126976, 00:15:32.352 "uuid": "01cebd0f-c67e-4024-a62a-6c4fb649c608", 00:15:32.352 "assigned_rate_limits": { 00:15:32.352 "rw_ios_per_sec": 0, 00:15:32.352 
"rw_mbytes_per_sec": 0, 00:15:32.352 "r_mbytes_per_sec": 0, 00:15:32.352 "w_mbytes_per_sec": 0 00:15:32.352 }, 00:15:32.352 "claimed": false, 00:15:32.352 "zoned": false, 00:15:32.352 "supported_io_types": { 00:15:32.352 "read": true, 00:15:32.352 "write": true, 00:15:32.352 "unmap": false, 00:15:32.352 "flush": false, 00:15:32.352 "reset": true, 00:15:32.352 "nvme_admin": false, 00:15:32.352 "nvme_io": false, 00:15:32.352 "nvme_io_md": false, 00:15:32.352 "write_zeroes": true, 00:15:32.352 "zcopy": false, 00:15:32.352 "get_zone_info": false, 00:15:32.352 "zone_management": false, 00:15:32.352 "zone_append": false, 00:15:32.352 "compare": false, 00:15:32.352 "compare_and_write": false, 00:15:32.352 "abort": false, 00:15:32.352 "seek_hole": false, 00:15:32.352 "seek_data": false, 00:15:32.352 "copy": false, 00:15:32.352 "nvme_iov_md": false 00:15:32.352 }, 00:15:32.352 "driver_specific": { 00:15:32.352 "raid": { 00:15:32.352 "uuid": "01cebd0f-c67e-4024-a62a-6c4fb649c608", 00:15:32.352 "strip_size_kb": 64, 00:15:32.352 "state": "online", 00:15:32.352 "raid_level": "raid5f", 00:15:32.352 "superblock": true, 00:15:32.352 "num_base_bdevs": 3, 00:15:32.352 "num_base_bdevs_discovered": 3, 00:15:32.352 "num_base_bdevs_operational": 3, 00:15:32.352 "base_bdevs_list": [ 00:15:32.352 { 00:15:32.352 "name": "BaseBdev1", 00:15:32.352 "uuid": "689abdcd-428e-4b77-911e-2a89e70b6e7c", 00:15:32.352 "is_configured": true, 00:15:32.352 "data_offset": 2048, 00:15:32.352 "data_size": 63488 00:15:32.352 }, 00:15:32.352 { 00:15:32.352 "name": "BaseBdev2", 00:15:32.352 "uuid": "0f837590-6fe6-4bed-b209-0b7c3b6a8a14", 00:15:32.352 "is_configured": true, 00:15:32.352 "data_offset": 2048, 00:15:32.352 "data_size": 63488 00:15:32.352 }, 00:15:32.352 { 00:15:32.352 "name": "BaseBdev3", 00:15:32.352 "uuid": "a5b8dc2f-e145-49d0-b69e-419d3e8119e9", 00:15:32.352 "is_configured": true, 00:15:32.352 "data_offset": 2048, 00:15:32.352 "data_size": 63488 00:15:32.352 } 00:15:32.352 ] 00:15:32.352 } 
00:15:32.352 } 00:15:32.352 }' 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:32.352 BaseBdev2 00:15:32.352 BaseBdev3' 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:32.352 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.353 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.353 [2024-11-17 01:35:40.796737] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.612 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.612 "name": "Existed_Raid", 00:15:32.612 "uuid": "01cebd0f-c67e-4024-a62a-6c4fb649c608", 00:15:32.612 "strip_size_kb": 64, 00:15:32.612 "state": "online", 00:15:32.612 "raid_level": "raid5f", 00:15:32.612 "superblock": true, 00:15:32.612 "num_base_bdevs": 3, 00:15:32.612 "num_base_bdevs_discovered": 2, 00:15:32.612 "num_base_bdevs_operational": 2, 00:15:32.612 "base_bdevs_list": [ 00:15:32.613 { 00:15:32.613 "name": null, 00:15:32.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.613 "is_configured": false, 00:15:32.613 "data_offset": 0, 00:15:32.613 "data_size": 63488 00:15:32.613 }, 00:15:32.613 { 00:15:32.613 "name": "BaseBdev2", 00:15:32.613 "uuid": "0f837590-6fe6-4bed-b209-0b7c3b6a8a14", 00:15:32.613 "is_configured": true, 00:15:32.613 "data_offset": 2048, 00:15:32.613 "data_size": 63488 00:15:32.613 }, 00:15:32.613 { 00:15:32.613 "name": "BaseBdev3", 00:15:32.613 "uuid": "a5b8dc2f-e145-49d0-b69e-419d3e8119e9", 00:15:32.613 "is_configured": true, 00:15:32.613 "data_offset": 2048, 00:15:32.613 "data_size": 63488 00:15:32.613 } 00:15:32.613 ] 00:15:32.613 }' 00:15:32.613 01:35:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.613 01:35:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.183 01:35:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.183 [2024-11-17 01:35:41.373150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.183 [2024-11-17 01:35:41.373340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.183 [2024-11-17 01:35:41.465535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.183 [2024-11-17 01:35:41.525453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:33.183 [2024-11-17 01:35:41.525543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.183 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.444 BaseBdev2 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.444 [ 00:15:33.444 { 00:15:33.444 "name": "BaseBdev2", 00:15:33.444 "aliases": [ 00:15:33.444 "3eaede1f-63cf-4732-8633-3eed05842643" 00:15:33.444 ], 00:15:33.444 "product_name": "Malloc disk", 00:15:33.444 "block_size": 512, 00:15:33.444 "num_blocks": 65536, 00:15:33.444 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:33.444 "assigned_rate_limits": { 00:15:33.444 "rw_ios_per_sec": 0, 00:15:33.444 "rw_mbytes_per_sec": 0, 00:15:33.444 "r_mbytes_per_sec": 0, 00:15:33.444 "w_mbytes_per_sec": 0 00:15:33.444 }, 00:15:33.444 "claimed": false, 00:15:33.444 "zoned": false, 00:15:33.444 "supported_io_types": { 00:15:33.444 "read": true, 00:15:33.444 "write": true, 00:15:33.444 "unmap": true, 00:15:33.444 "flush": true, 00:15:33.444 "reset": true, 00:15:33.444 "nvme_admin": false, 00:15:33.444 "nvme_io": false, 00:15:33.444 "nvme_io_md": false, 00:15:33.444 "write_zeroes": true, 00:15:33.444 "zcopy": true, 00:15:33.444 "get_zone_info": false, 00:15:33.444 "zone_management": false, 00:15:33.444 "zone_append": false, 
00:15:33.444 "compare": false, 00:15:33.444 "compare_and_write": false, 00:15:33.444 "abort": true, 00:15:33.444 "seek_hole": false, 00:15:33.444 "seek_data": false, 00:15:33.444 "copy": true, 00:15:33.444 "nvme_iov_md": false 00:15:33.444 }, 00:15:33.444 "memory_domains": [ 00:15:33.444 { 00:15:33.444 "dma_device_id": "system", 00:15:33.444 "dma_device_type": 1 00:15:33.444 }, 00:15:33.444 { 00:15:33.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.444 "dma_device_type": 2 00:15:33.444 } 00:15:33.444 ], 00:15:33.444 "driver_specific": {} 00:15:33.444 } 00:15:33.444 ] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.444 BaseBdev3 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:33.444 
01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.444 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.444 [ 00:15:33.444 { 00:15:33.444 "name": "BaseBdev3", 00:15:33.444 "aliases": [ 00:15:33.444 "f7812f70-952a-471b-9ce1-f86a62f51f3c" 00:15:33.444 ], 00:15:33.444 "product_name": "Malloc disk", 00:15:33.444 "block_size": 512, 00:15:33.444 "num_blocks": 65536, 00:15:33.445 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:33.445 "assigned_rate_limits": { 00:15:33.445 "rw_ios_per_sec": 0, 00:15:33.445 "rw_mbytes_per_sec": 0, 00:15:33.445 "r_mbytes_per_sec": 0, 00:15:33.445 "w_mbytes_per_sec": 0 00:15:33.445 }, 00:15:33.445 "claimed": false, 00:15:33.445 "zoned": false, 00:15:33.445 "supported_io_types": { 00:15:33.445 "read": true, 00:15:33.445 "write": true, 00:15:33.445 "unmap": true, 00:15:33.445 "flush": true, 00:15:33.445 "reset": true, 00:15:33.445 "nvme_admin": false, 00:15:33.445 "nvme_io": false, 00:15:33.445 "nvme_io_md": false, 00:15:33.445 "write_zeroes": true, 00:15:33.445 "zcopy": true, 00:15:33.445 "get_zone_info": 
false, 00:15:33.445 "zone_management": false, 00:15:33.445 "zone_append": false, 00:15:33.445 "compare": false, 00:15:33.445 "compare_and_write": false, 00:15:33.445 "abort": true, 00:15:33.445 "seek_hole": false, 00:15:33.445 "seek_data": false, 00:15:33.445 "copy": true, 00:15:33.445 "nvme_iov_md": false 00:15:33.445 }, 00:15:33.445 "memory_domains": [ 00:15:33.445 { 00:15:33.445 "dma_device_id": "system", 00:15:33.445 "dma_device_type": 1 00:15:33.445 }, 00:15:33.445 { 00:15:33.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.445 "dma_device_type": 2 00:15:33.445 } 00:15:33.445 ], 00:15:33.445 "driver_specific": {} 00:15:33.445 } 00:15:33.445 ] 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.445 [2024-11-17 01:35:41.838056] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.445 [2024-11-17 01:35:41.838152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.445 [2024-11-17 01:35:41.838222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.445 [2024-11-17 01:35:41.840026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.445 01:35:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.445 "name": "Existed_Raid", 00:15:33.445 "uuid": "b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:33.445 "strip_size_kb": 64, 00:15:33.445 "state": "configuring", 00:15:33.445 "raid_level": "raid5f", 00:15:33.445 "superblock": true, 00:15:33.445 "num_base_bdevs": 3, 00:15:33.445 "num_base_bdevs_discovered": 2, 00:15:33.445 "num_base_bdevs_operational": 3, 00:15:33.445 "base_bdevs_list": [ 00:15:33.445 { 00:15:33.445 "name": "BaseBdev1", 00:15:33.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.445 "is_configured": false, 00:15:33.445 "data_offset": 0, 00:15:33.445 "data_size": 0 00:15:33.445 }, 00:15:33.445 { 00:15:33.445 "name": "BaseBdev2", 00:15:33.445 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:33.445 "is_configured": true, 00:15:33.445 "data_offset": 2048, 00:15:33.445 "data_size": 63488 00:15:33.445 }, 00:15:33.445 { 00:15:33.445 "name": "BaseBdev3", 00:15:33.445 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:33.445 "is_configured": true, 00:15:33.445 "data_offset": 2048, 00:15:33.445 "data_size": 63488 00:15:33.445 } 00:15:33.445 ] 00:15:33.445 }' 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.445 01:35:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.015 [2024-11-17 01:35:42.305277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.015 
01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.015 "name": "Existed_Raid", 00:15:34.015 "uuid": 
"b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:34.015 "strip_size_kb": 64, 00:15:34.015 "state": "configuring", 00:15:34.015 "raid_level": "raid5f", 00:15:34.015 "superblock": true, 00:15:34.015 "num_base_bdevs": 3, 00:15:34.015 "num_base_bdevs_discovered": 1, 00:15:34.015 "num_base_bdevs_operational": 3, 00:15:34.015 "base_bdevs_list": [ 00:15:34.015 { 00:15:34.015 "name": "BaseBdev1", 00:15:34.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.015 "is_configured": false, 00:15:34.015 "data_offset": 0, 00:15:34.015 "data_size": 0 00:15:34.015 }, 00:15:34.015 { 00:15:34.015 "name": null, 00:15:34.015 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:34.015 "is_configured": false, 00:15:34.015 "data_offset": 0, 00:15:34.015 "data_size": 63488 00:15:34.015 }, 00:15:34.015 { 00:15:34.015 "name": "BaseBdev3", 00:15:34.015 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:34.015 "is_configured": true, 00:15:34.015 "data_offset": 2048, 00:15:34.015 "data_size": 63488 00:15:34.015 } 00:15:34.015 ] 00:15:34.015 }' 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.015 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:34.595 01:35:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.595 [2024-11-17 01:35:42.852476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.595 BaseBdev1 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:34.595 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.595 [ 00:15:34.595 { 00:15:34.595 "name": "BaseBdev1", 00:15:34.595 "aliases": [ 00:15:34.595 "ea483d3e-ac33-4049-91b5-4327047cb6b2" 00:15:34.595 ], 00:15:34.595 "product_name": "Malloc disk", 00:15:34.595 "block_size": 512, 00:15:34.595 "num_blocks": 65536, 00:15:34.595 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:34.595 "assigned_rate_limits": { 00:15:34.595 "rw_ios_per_sec": 0, 00:15:34.595 "rw_mbytes_per_sec": 0, 00:15:34.595 "r_mbytes_per_sec": 0, 00:15:34.595 "w_mbytes_per_sec": 0 00:15:34.595 }, 00:15:34.595 "claimed": true, 00:15:34.595 "claim_type": "exclusive_write", 00:15:34.595 "zoned": false, 00:15:34.595 "supported_io_types": { 00:15:34.595 "read": true, 00:15:34.595 "write": true, 00:15:34.595 "unmap": true, 00:15:34.595 "flush": true, 00:15:34.595 "reset": true, 00:15:34.595 "nvme_admin": false, 00:15:34.595 "nvme_io": false, 00:15:34.595 "nvme_io_md": false, 00:15:34.595 "write_zeroes": true, 00:15:34.595 "zcopy": true, 00:15:34.595 "get_zone_info": false, 00:15:34.595 "zone_management": false, 00:15:34.595 "zone_append": false, 00:15:34.595 "compare": false, 00:15:34.595 "compare_and_write": false, 00:15:34.595 "abort": true, 00:15:34.595 "seek_hole": false, 00:15:34.595 "seek_data": false, 00:15:34.595 "copy": true, 00:15:34.595 "nvme_iov_md": false 00:15:34.595 }, 00:15:34.596 "memory_domains": [ 00:15:34.596 { 00:15:34.596 "dma_device_id": "system", 00:15:34.596 "dma_device_type": 1 00:15:34.596 }, 00:15:34.596 { 00:15:34.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.596 "dma_device_type": 2 00:15:34.596 } 00:15:34.596 ], 00:15:34.596 "driver_specific": {} 00:15:34.596 } 00:15:34.596 ] 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.596 "name": "Existed_Raid", 00:15:34.596 "uuid": 
"b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:34.596 "strip_size_kb": 64, 00:15:34.596 "state": "configuring", 00:15:34.596 "raid_level": "raid5f", 00:15:34.596 "superblock": true, 00:15:34.596 "num_base_bdevs": 3, 00:15:34.596 "num_base_bdevs_discovered": 2, 00:15:34.596 "num_base_bdevs_operational": 3, 00:15:34.596 "base_bdevs_list": [ 00:15:34.596 { 00:15:34.596 "name": "BaseBdev1", 00:15:34.596 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:34.596 "is_configured": true, 00:15:34.596 "data_offset": 2048, 00:15:34.596 "data_size": 63488 00:15:34.596 }, 00:15:34.596 { 00:15:34.596 "name": null, 00:15:34.596 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:34.596 "is_configured": false, 00:15:34.596 "data_offset": 0, 00:15:34.596 "data_size": 63488 00:15:34.596 }, 00:15:34.596 { 00:15:34.596 "name": "BaseBdev3", 00:15:34.596 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:34.596 "is_configured": true, 00:15:34.596 "data_offset": 2048, 00:15:34.596 "data_size": 63488 00:15:34.596 } 00:15:34.596 ] 00:15:34.596 }' 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.596 01:35:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:35.178 01:35:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.178 [2024-11-17 01:35:43.387603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.178 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.178 "name": "Existed_Raid", 00:15:35.178 "uuid": "b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:35.178 "strip_size_kb": 64, 00:15:35.178 "state": "configuring", 00:15:35.178 "raid_level": "raid5f", 00:15:35.178 "superblock": true, 00:15:35.178 "num_base_bdevs": 3, 00:15:35.178 "num_base_bdevs_discovered": 1, 00:15:35.178 "num_base_bdevs_operational": 3, 00:15:35.178 "base_bdevs_list": [ 00:15:35.178 { 00:15:35.178 "name": "BaseBdev1", 00:15:35.178 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:35.179 "is_configured": true, 00:15:35.179 "data_offset": 2048, 00:15:35.179 "data_size": 63488 00:15:35.179 }, 00:15:35.179 { 00:15:35.179 "name": null, 00:15:35.179 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:35.179 "is_configured": false, 00:15:35.179 "data_offset": 0, 00:15:35.179 "data_size": 63488 00:15:35.179 }, 00:15:35.179 { 00:15:35.179 "name": null, 00:15:35.179 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:35.179 "is_configured": false, 00:15:35.179 "data_offset": 0, 00:15:35.179 "data_size": 63488 00:15:35.179 } 00:15:35.179 ] 00:15:35.179 }' 00:15:35.179 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.179 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.438 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.438 01:35:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.438 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:35.438 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.439 [2024-11-17 01:35:43.862896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.439 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.700 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.700 "name": "Existed_Raid", 00:15:35.700 "uuid": "b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:35.700 "strip_size_kb": 64, 00:15:35.700 "state": "configuring", 00:15:35.700 "raid_level": "raid5f", 00:15:35.700 "superblock": true, 00:15:35.700 "num_base_bdevs": 3, 00:15:35.700 "num_base_bdevs_discovered": 2, 00:15:35.700 "num_base_bdevs_operational": 3, 00:15:35.700 "base_bdevs_list": [ 00:15:35.700 { 00:15:35.700 "name": "BaseBdev1", 00:15:35.700 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:35.700 "is_configured": true, 00:15:35.700 "data_offset": 2048, 00:15:35.700 "data_size": 63488 00:15:35.700 }, 00:15:35.700 { 00:15:35.700 "name": null, 00:15:35.700 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:35.700 "is_configured": false, 00:15:35.700 "data_offset": 0, 00:15:35.700 "data_size": 63488 00:15:35.700 }, 00:15:35.700 { 00:15:35.700 "name": "BaseBdev3", 00:15:35.700 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 
00:15:35.700 "is_configured": true, 00:15:35.700 "data_offset": 2048, 00:15:35.700 "data_size": 63488 00:15:35.700 } 00:15:35.700 ] 00:15:35.700 }' 00:15:35.700 01:35:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.700 01:35:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 [2024-11-17 01:35:44.310133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.960 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.221 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.221 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.221 "name": "Existed_Raid", 00:15:36.221 "uuid": "b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:36.221 "strip_size_kb": 64, 00:15:36.221 "state": "configuring", 00:15:36.221 "raid_level": "raid5f", 00:15:36.221 "superblock": true, 00:15:36.221 "num_base_bdevs": 3, 00:15:36.221 "num_base_bdevs_discovered": 1, 00:15:36.221 "num_base_bdevs_operational": 3, 00:15:36.221 "base_bdevs_list": [ 00:15:36.221 { 00:15:36.221 
"name": null, 00:15:36.221 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:36.221 "is_configured": false, 00:15:36.221 "data_offset": 0, 00:15:36.221 "data_size": 63488 00:15:36.221 }, 00:15:36.221 { 00:15:36.221 "name": null, 00:15:36.221 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:36.221 "is_configured": false, 00:15:36.221 "data_offset": 0, 00:15:36.221 "data_size": 63488 00:15:36.221 }, 00:15:36.221 { 00:15:36.221 "name": "BaseBdev3", 00:15:36.221 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:36.221 "is_configured": true, 00:15:36.221 "data_offset": 2048, 00:15:36.221 "data_size": 63488 00:15:36.221 } 00:15:36.221 ] 00:15:36.221 }' 00:15:36.221 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.221 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.481 [2024-11-17 
01:35:44.901651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.481 01:35:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.741 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.741 "name": "Existed_Raid", 00:15:36.741 "uuid": "b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:36.741 "strip_size_kb": 64, 00:15:36.741 "state": "configuring", 00:15:36.741 "raid_level": "raid5f", 00:15:36.741 "superblock": true, 00:15:36.741 "num_base_bdevs": 3, 00:15:36.741 "num_base_bdevs_discovered": 2, 00:15:36.741 "num_base_bdevs_operational": 3, 00:15:36.741 "base_bdevs_list": [ 00:15:36.741 { 00:15:36.741 "name": null, 00:15:36.741 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:36.741 "is_configured": false, 00:15:36.741 "data_offset": 0, 00:15:36.741 "data_size": 63488 00:15:36.741 }, 00:15:36.741 { 00:15:36.741 "name": "BaseBdev2", 00:15:36.741 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:36.741 "is_configured": true, 00:15:36.741 "data_offset": 2048, 00:15:36.741 "data_size": 63488 00:15:36.741 }, 00:15:36.741 { 00:15:36.741 "name": "BaseBdev3", 00:15:36.741 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:36.741 "is_configured": true, 00:15:36.741 "data_offset": 2048, 00:15:36.741 "data_size": 63488 00:15:36.741 } 00:15:36.741 ] 00:15:36.741 }' 00:15:36.741 01:35:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.741 01:35:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.001 01:35:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ea483d3e-ac33-4049-91b5-4327047cb6b2 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.001 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.261 [2024-11-17 01:35:45.480392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:37.261 [2024-11-17 01:35:45.480705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:37.261 [2024-11-17 01:35:45.480773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:37.261 [2024-11-17 01:35:45.481068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:37.261 NewBaseBdev 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:37.261 01:35:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.261 [2024-11-17 01:35:45.486392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:37.261 [2024-11-17 01:35:45.486453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:37.261 [2024-11-17 01:35:45.486660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.261 [ 00:15:37.261 { 00:15:37.261 "name": "NewBaseBdev", 00:15:37.261 "aliases": [ 00:15:37.261 "ea483d3e-ac33-4049-91b5-4327047cb6b2" 00:15:37.261 ], 00:15:37.261 "product_name": "Malloc 
disk", 00:15:37.261 "block_size": 512, 00:15:37.261 "num_blocks": 65536, 00:15:37.261 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:37.261 "assigned_rate_limits": { 00:15:37.261 "rw_ios_per_sec": 0, 00:15:37.261 "rw_mbytes_per_sec": 0, 00:15:37.261 "r_mbytes_per_sec": 0, 00:15:37.261 "w_mbytes_per_sec": 0 00:15:37.261 }, 00:15:37.261 "claimed": true, 00:15:37.261 "claim_type": "exclusive_write", 00:15:37.261 "zoned": false, 00:15:37.261 "supported_io_types": { 00:15:37.261 "read": true, 00:15:37.261 "write": true, 00:15:37.261 "unmap": true, 00:15:37.261 "flush": true, 00:15:37.261 "reset": true, 00:15:37.261 "nvme_admin": false, 00:15:37.261 "nvme_io": false, 00:15:37.261 "nvme_io_md": false, 00:15:37.261 "write_zeroes": true, 00:15:37.261 "zcopy": true, 00:15:37.261 "get_zone_info": false, 00:15:37.261 "zone_management": false, 00:15:37.261 "zone_append": false, 00:15:37.261 "compare": false, 00:15:37.261 "compare_and_write": false, 00:15:37.261 "abort": true, 00:15:37.261 "seek_hole": false, 00:15:37.261 "seek_data": false, 00:15:37.261 "copy": true, 00:15:37.261 "nvme_iov_md": false 00:15:37.261 }, 00:15:37.261 "memory_domains": [ 00:15:37.261 { 00:15:37.261 "dma_device_id": "system", 00:15:37.261 "dma_device_type": 1 00:15:37.261 }, 00:15:37.261 { 00:15:37.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.261 "dma_device_type": 2 00:15:37.261 } 00:15:37.261 ], 00:15:37.261 "driver_specific": {} 00:15:37.261 } 00:15:37.261 ] 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.261 01:35:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.261 "name": "Existed_Raid", 00:15:37.261 "uuid": "b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:37.261 "strip_size_kb": 64, 00:15:37.261 "state": "online", 00:15:37.261 "raid_level": "raid5f", 00:15:37.261 "superblock": true, 00:15:37.261 "num_base_bdevs": 3, 00:15:37.261 "num_base_bdevs_discovered": 3, 00:15:37.261 "num_base_bdevs_operational": 3, 00:15:37.261 
"base_bdevs_list": [ 00:15:37.261 { 00:15:37.261 "name": "NewBaseBdev", 00:15:37.261 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:37.261 "is_configured": true, 00:15:37.261 "data_offset": 2048, 00:15:37.261 "data_size": 63488 00:15:37.261 }, 00:15:37.261 { 00:15:37.261 "name": "BaseBdev2", 00:15:37.261 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:37.261 "is_configured": true, 00:15:37.261 "data_offset": 2048, 00:15:37.261 "data_size": 63488 00:15:37.261 }, 00:15:37.261 { 00:15:37.261 "name": "BaseBdev3", 00:15:37.261 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:37.261 "is_configured": true, 00:15:37.261 "data_offset": 2048, 00:15:37.261 "data_size": 63488 00:15:37.261 } 00:15:37.261 ] 00:15:37.261 }' 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.261 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.521 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.522 [2024-11-17 01:35:45.924070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.522 "name": "Existed_Raid", 00:15:37.522 "aliases": [ 00:15:37.522 "b84ef724-9c9a-4c3b-9062-51ee6062dad4" 00:15:37.522 ], 00:15:37.522 "product_name": "Raid Volume", 00:15:37.522 "block_size": 512, 00:15:37.522 "num_blocks": 126976, 00:15:37.522 "uuid": "b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:37.522 "assigned_rate_limits": { 00:15:37.522 "rw_ios_per_sec": 0, 00:15:37.522 "rw_mbytes_per_sec": 0, 00:15:37.522 "r_mbytes_per_sec": 0, 00:15:37.522 "w_mbytes_per_sec": 0 00:15:37.522 }, 00:15:37.522 "claimed": false, 00:15:37.522 "zoned": false, 00:15:37.522 "supported_io_types": { 00:15:37.522 "read": true, 00:15:37.522 "write": true, 00:15:37.522 "unmap": false, 00:15:37.522 "flush": false, 00:15:37.522 "reset": true, 00:15:37.522 "nvme_admin": false, 00:15:37.522 "nvme_io": false, 00:15:37.522 "nvme_io_md": false, 00:15:37.522 "write_zeroes": true, 00:15:37.522 "zcopy": false, 00:15:37.522 "get_zone_info": false, 00:15:37.522 "zone_management": false, 00:15:37.522 "zone_append": false, 00:15:37.522 "compare": false, 00:15:37.522 "compare_and_write": false, 00:15:37.522 "abort": false, 00:15:37.522 "seek_hole": false, 00:15:37.522 "seek_data": false, 00:15:37.522 "copy": false, 00:15:37.522 "nvme_iov_md": false 00:15:37.522 }, 00:15:37.522 "driver_specific": { 00:15:37.522 "raid": { 00:15:37.522 "uuid": "b84ef724-9c9a-4c3b-9062-51ee6062dad4", 00:15:37.522 "strip_size_kb": 64, 00:15:37.522 "state": "online", 00:15:37.522 "raid_level": "raid5f", 00:15:37.522 "superblock": true, 00:15:37.522 
"num_base_bdevs": 3, 00:15:37.522 "num_base_bdevs_discovered": 3, 00:15:37.522 "num_base_bdevs_operational": 3, 00:15:37.522 "base_bdevs_list": [ 00:15:37.522 { 00:15:37.522 "name": "NewBaseBdev", 00:15:37.522 "uuid": "ea483d3e-ac33-4049-91b5-4327047cb6b2", 00:15:37.522 "is_configured": true, 00:15:37.522 "data_offset": 2048, 00:15:37.522 "data_size": 63488 00:15:37.522 }, 00:15:37.522 { 00:15:37.522 "name": "BaseBdev2", 00:15:37.522 "uuid": "3eaede1f-63cf-4732-8633-3eed05842643", 00:15:37.522 "is_configured": true, 00:15:37.522 "data_offset": 2048, 00:15:37.522 "data_size": 63488 00:15:37.522 }, 00:15:37.522 { 00:15:37.522 "name": "BaseBdev3", 00:15:37.522 "uuid": "f7812f70-952a-471b-9ce1-f86a62f51f3c", 00:15:37.522 "is_configured": true, 00:15:37.522 "data_offset": 2048, 00:15:37.522 "data_size": 63488 00:15:37.522 } 00:15:37.522 ] 00:15:37.522 } 00:15:37.522 } 00:15:37.522 }' 00:15:37.522 01:35:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:37.783 BaseBdev2 00:15:37.783 BaseBdev3' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.783 [2024-11-17 01:35:46.231364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.783 [2024-11-17 01:35:46.231388] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.783 [2024-11-17 01:35:46.231456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.783 [2024-11-17 01:35:46.231722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.783 [2024-11-17 01:35:46.231735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80249 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80249 ']' 00:15:37.783 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80249 00:15:37.783 01:35:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:38.043 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.043 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80249 00:15:38.043 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.043 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.043 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80249' 00:15:38.043 killing process with pid 80249 00:15:38.043 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80249 00:15:38.043 [2024-11-17 01:35:46.277541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.043 01:35:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80249 00:15:38.303 [2024-11-17 01:35:46.562560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.244 ************************************ 00:15:39.244 END TEST raid5f_state_function_test_sb 00:15:39.244 ************************************ 00:15:39.244 01:35:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:39.244 00:15:39.244 real 0m10.569s 00:15:39.244 user 0m16.783s 00:15:39.244 sys 0m2.055s 00:15:39.244 01:35:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.244 01:35:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 01:35:47 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:39.244 01:35:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:39.244 
01:35:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.244 01:35:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 ************************************ 00:15:39.244 START TEST raid5f_superblock_test 00:15:39.244 ************************************ 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80874 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80874 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80874 ']' 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.244 01:35:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.505 [2024-11-17 01:35:47.767508] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:39.505 [2024-11-17 01:35:47.767639] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80874 ] 00:15:39.505 [2024-11-17 01:35:47.947302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.765 [2024-11-17 01:35:48.051979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.025 [2024-11-17 01:35:48.246336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.025 [2024-11-17 01:35:48.246392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.285 malloc1 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.285 [2024-11-17 01:35:48.631041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:40.285 [2024-11-17 01:35:48.631164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.285 [2024-11-17 01:35:48.631214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:40.285 [2024-11-17 01:35:48.631253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.285 [2024-11-17 01:35:48.633343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.285 [2024-11-17 01:35:48.633422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:40.285 pt1 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.285 malloc2 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.285 [2024-11-17 01:35:48.687801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.285 [2024-11-17 01:35:48.687853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.285 [2024-11-17 01:35:48.687875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:40.285 [2024-11-17 01:35:48.687883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.285 [2024-11-17 01:35:48.689883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.285 [2024-11-17 01:35:48.689918] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.285 pt2 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.285 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.546 malloc3 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.546 [2024-11-17 01:35:48.774916] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:40.546 [2024-11-17 01:35:48.775029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.546 [2024-11-17 01:35:48.775081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:40.546 [2024-11-17 01:35:48.775116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.546 [2024-11-17 01:35:48.777132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.546 [2024-11-17 01:35:48.777206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:40.546 pt3 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.546 [2024-11-17 01:35:48.786946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:40.546 [2024-11-17 01:35:48.788718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.546 [2024-11-17 01:35:48.788855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:40.546 [2024-11-17 01:35:48.789052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:40.546 [2024-11-17 01:35:48.789110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:40.546 [2024-11-17 01:35:48.789362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:40.546 [2024-11-17 01:35:48.794703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:40.546 [2024-11-17 01:35:48.794773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:40.546 [2024-11-17 01:35:48.795023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.546 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.546 "name": "raid_bdev1", 00:15:40.546 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:40.546 "strip_size_kb": 64, 00:15:40.546 "state": "online", 00:15:40.546 "raid_level": "raid5f", 00:15:40.546 "superblock": true, 00:15:40.546 "num_base_bdevs": 3, 00:15:40.546 "num_base_bdevs_discovered": 3, 00:15:40.546 "num_base_bdevs_operational": 3, 00:15:40.546 "base_bdevs_list": [ 00:15:40.546 { 00:15:40.546 "name": "pt1", 00:15:40.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.546 "is_configured": true, 00:15:40.546 "data_offset": 2048, 00:15:40.546 "data_size": 63488 00:15:40.546 }, 00:15:40.546 { 00:15:40.546 "name": "pt2", 00:15:40.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.546 "is_configured": true, 00:15:40.546 "data_offset": 2048, 00:15:40.546 "data_size": 63488 00:15:40.546 }, 00:15:40.546 { 00:15:40.546 "name": "pt3", 00:15:40.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.547 "is_configured": true, 00:15:40.547 "data_offset": 2048, 00:15:40.547 "data_size": 63488 00:15:40.547 } 00:15:40.547 ] 00:15:40.547 }' 00:15:40.547 01:35:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.547 01:35:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.806 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:40.807 01:35:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:40.807 [2024-11-17 01:35:49.228698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.807 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:41.065 "name": "raid_bdev1", 00:15:41.065 "aliases": [ 00:15:41.065 "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a" 00:15:41.065 ], 00:15:41.065 "product_name": "Raid Volume", 00:15:41.065 "block_size": 512, 00:15:41.065 "num_blocks": 126976, 00:15:41.065 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:41.065 "assigned_rate_limits": { 00:15:41.065 "rw_ios_per_sec": 0, 00:15:41.065 "rw_mbytes_per_sec": 0, 00:15:41.065 "r_mbytes_per_sec": 0, 00:15:41.065 "w_mbytes_per_sec": 0 00:15:41.065 }, 00:15:41.065 "claimed": false, 00:15:41.065 "zoned": false, 00:15:41.065 "supported_io_types": { 00:15:41.065 "read": true, 00:15:41.065 "write": true, 00:15:41.065 "unmap": false, 00:15:41.065 "flush": false, 00:15:41.065 "reset": true, 00:15:41.065 "nvme_admin": false, 00:15:41.065 "nvme_io": false, 00:15:41.065 "nvme_io_md": false, 
00:15:41.065 "write_zeroes": true, 00:15:41.065 "zcopy": false, 00:15:41.065 "get_zone_info": false, 00:15:41.065 "zone_management": false, 00:15:41.065 "zone_append": false, 00:15:41.065 "compare": false, 00:15:41.065 "compare_and_write": false, 00:15:41.065 "abort": false, 00:15:41.065 "seek_hole": false, 00:15:41.065 "seek_data": false, 00:15:41.065 "copy": false, 00:15:41.065 "nvme_iov_md": false 00:15:41.065 }, 00:15:41.065 "driver_specific": { 00:15:41.065 "raid": { 00:15:41.065 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:41.065 "strip_size_kb": 64, 00:15:41.065 "state": "online", 00:15:41.065 "raid_level": "raid5f", 00:15:41.065 "superblock": true, 00:15:41.065 "num_base_bdevs": 3, 00:15:41.065 "num_base_bdevs_discovered": 3, 00:15:41.065 "num_base_bdevs_operational": 3, 00:15:41.065 "base_bdevs_list": [ 00:15:41.065 { 00:15:41.065 "name": "pt1", 00:15:41.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.065 "is_configured": true, 00:15:41.065 "data_offset": 2048, 00:15:41.065 "data_size": 63488 00:15:41.065 }, 00:15:41.065 { 00:15:41.065 "name": "pt2", 00:15:41.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.065 "is_configured": true, 00:15:41.065 "data_offset": 2048, 00:15:41.065 "data_size": 63488 00:15:41.065 }, 00:15:41.065 { 00:15:41.065 "name": "pt3", 00:15:41.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.065 "is_configured": true, 00:15:41.065 "data_offset": 2048, 00:15:41.065 "data_size": 63488 00:15:41.065 } 00:15:41.065 ] 00:15:41.065 } 00:15:41.065 } 00:15:41.065 }' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:41.065 pt2 00:15:41.065 pt3' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.065 
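The comparisons at @189/@192/@193 build a `"block_size md_size md_interleave dif_type"` string per bdev. For these passthru bdevs, md_size/md_interleave/dif_type are absent, and jq's `join(" ")` renders null elements as empty strings, which is why `cmp_base_bdev` ends up as `'512 '` with trailing spaces and the bash test matches against the escaped pattern `\5\1\2\ \ \ `. A minimal sketch of that join behavior (our reconstruction of the jq semantics, not SPDK code):

```python
# Mirrors: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq treats null array elements as empty strings when joining, so a bdev
# with no metadata fields yields "512" followed by three trailing spaces.
fields = [512, None, None, None]  # md_size, md_interleave, dif_type unset
cmp_base_bdev = " ".join("" if f is None else str(f) for f in fields)
print(repr(cmp_base_bdev))  # '512   '
```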
01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.065 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 [2024-11-17 01:35:49.528132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a ']' 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:41.325 01:35:49 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 [2024-11-17 01:35:49.571897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.325 [2024-11-17 01:35:49.571966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.325 [2024-11-17 01:35:49.572062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.325 [2024-11-17 01:35:49.572167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.325 [2024-11-17 01:35:49.572224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 [2024-11-17 01:35:49.723696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:41.325 [2024-11-17 01:35:49.725520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:41.325 [2024-11-17 01:35:49.725628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:41.325 [2024-11-17 01:35:49.725703] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:41.325 [2024-11-17 01:35:49.725834] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:41.325 [2024-11-17 01:35:49.725922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:41.325 [2024-11-17 01:35:49.726002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.325 [2024-11-17 01:35:49.726038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:41.325 request: 00:15:41.325 { 00:15:41.325 "name": "raid_bdev1", 00:15:41.325 "raid_level": "raid5f", 00:15:41.325 "base_bdevs": [ 00:15:41.325 "malloc1", 00:15:41.325 "malloc2", 00:15:41.325 "malloc3" 00:15:41.325 ], 00:15:41.325 "strip_size_kb": 64, 00:15:41.325 "superblock": false, 00:15:41.325 "method": "bdev_raid_create", 00:15:41.325 "req_id": 1 00:15:41.325 } 00:15:41.325 Got JSON-RPC error response 00:15:41.325 response: 00:15:41.325 { 00:15:41.325 "code": -17, 00:15:41.325 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:41.325 } 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.325 
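The `NOT` wrapper at @457 expects this second `bdev_raid_create` to fail: the malloc bdevs already carry raid_bdev1's superblock, so the RPC returns -17 (`-EEXIST`). A minimal sketch of checking such an error response, with the response body copied verbatim from the log above:

```python
import json

# Error response returned by the duplicate bdev_raid_create, as logged above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 corresponds to -EEXIST: the base bdevs already hold a superblock for
# raid_bdev1, so re-creating it (even with superblock=false) is rejected.
assert response["code"] == -17
print(response["message"])
```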
01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.325 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.586 [2024-11-17 01:35:49.791527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:41.586 [2024-11-17 01:35:49.791612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.586 [2024-11-17 01:35:49.791649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:41.586 [2024-11-17 01:35:49.791683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.586 [2024-11-17 01:35:49.793727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.586 [2024-11-17 01:35:49.793813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:41.586 [2024-11-17 01:35:49.793919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:41.586 [2024-11-17 01:35:49.794009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.586 pt1 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.586 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.586 "name": "raid_bdev1", 00:15:41.586 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:41.586 "strip_size_kb": 64, 00:15:41.586 "state": "configuring", 00:15:41.586 "raid_level": "raid5f", 00:15:41.586 "superblock": true, 00:15:41.586 "num_base_bdevs": 3, 00:15:41.586 "num_base_bdevs_discovered": 1, 00:15:41.586 
"num_base_bdevs_operational": 3, 00:15:41.586 "base_bdevs_list": [ 00:15:41.586 { 00:15:41.586 "name": "pt1", 00:15:41.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.586 "is_configured": true, 00:15:41.586 "data_offset": 2048, 00:15:41.586 "data_size": 63488 00:15:41.586 }, 00:15:41.586 { 00:15:41.586 "name": null, 00:15:41.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.586 "is_configured": false, 00:15:41.586 "data_offset": 2048, 00:15:41.586 "data_size": 63488 00:15:41.586 }, 00:15:41.587 { 00:15:41.587 "name": null, 00:15:41.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.587 "is_configured": false, 00:15:41.587 "data_offset": 2048, 00:15:41.587 "data_size": 63488 00:15:41.587 } 00:15:41.587 ] 00:15:41.587 }' 00:15:41.587 01:35:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.587 01:35:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.847 [2024-11-17 01:35:50.230958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:41.847 [2024-11-17 01:35:50.231062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.847 [2024-11-17 01:35:50.231118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:41.847 [2024-11-17 01:35:50.231178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.847 [2024-11-17 01:35:50.231632] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.847 [2024-11-17 01:35:50.231706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:41.847 [2024-11-17 01:35:50.231845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:41.847 [2024-11-17 01:35:50.231909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:41.847 pt2 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.847 [2024-11-17 01:35:50.242938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.847 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.847 "name": "raid_bdev1", 00:15:41.847 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:41.847 "strip_size_kb": 64, 00:15:41.847 "state": "configuring", 00:15:41.847 "raid_level": "raid5f", 00:15:41.847 "superblock": true, 00:15:41.847 "num_base_bdevs": 3, 00:15:41.847 "num_base_bdevs_discovered": 1, 00:15:41.847 "num_base_bdevs_operational": 3, 00:15:41.847 "base_bdevs_list": [ 00:15:41.847 { 00:15:41.847 "name": "pt1", 00:15:41.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.847 "is_configured": true, 00:15:41.847 "data_offset": 2048, 00:15:41.847 "data_size": 63488 00:15:41.847 }, 00:15:41.847 { 00:15:41.847 "name": null, 00:15:41.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.847 "is_configured": false, 00:15:41.847 "data_offset": 0, 00:15:41.847 "data_size": 63488 00:15:41.847 }, 00:15:41.847 { 00:15:41.847 "name": null, 00:15:41.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.847 "is_configured": false, 00:15:41.847 "data_offset": 2048, 00:15:41.847 "data_size": 63488 00:15:41.847 } 00:15:41.847 ] 00:15:41.847 }' 00:15:41.848 01:35:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.848 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.417 [2024-11-17 01:35:50.698158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.417 [2024-11-17 01:35:50.698264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.417 [2024-11-17 01:35:50.698315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:42.417 [2024-11-17 01:35:50.698358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.417 [2024-11-17 01:35:50.698830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.417 [2024-11-17 01:35:50.698904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.417 [2024-11-17 01:35:50.699020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:42.417 [2024-11-17 01:35:50.699084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.417 pt2 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:42.417 01:35:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.417 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.417 [2024-11-17 01:35:50.710124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:42.417 [2024-11-17 01:35:50.710215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.417 [2024-11-17 01:35:50.710260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:42.417 [2024-11-17 01:35:50.710276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.418 [2024-11-17 01:35:50.710623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.418 [2024-11-17 01:35:50.710646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:42.418 [2024-11-17 01:35:50.710705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:42.418 [2024-11-17 01:35:50.710723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:42.418 [2024-11-17 01:35:50.710861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:42.418 [2024-11-17 01:35:50.710873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:42.418 [2024-11-17 01:35:50.711119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:42.418 pt3 00:15:42.418 [2024-11-17 01:35:50.716351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:42.418 [2024-11-17 01:35:50.716372] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:42.418 [2024-11-17 01:35:50.716542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.418 "name": "raid_bdev1", 00:15:42.418 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:42.418 "strip_size_kb": 64, 00:15:42.418 "state": "online", 00:15:42.418 "raid_level": "raid5f", 00:15:42.418 "superblock": true, 00:15:42.418 "num_base_bdevs": 3, 00:15:42.418 "num_base_bdevs_discovered": 3, 00:15:42.418 "num_base_bdevs_operational": 3, 00:15:42.418 "base_bdevs_list": [ 00:15:42.418 { 00:15:42.418 "name": "pt1", 00:15:42.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.418 "is_configured": true, 00:15:42.418 "data_offset": 2048, 00:15:42.418 "data_size": 63488 00:15:42.418 }, 00:15:42.418 { 00:15:42.418 "name": "pt2", 00:15:42.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.418 "is_configured": true, 00:15:42.418 "data_offset": 2048, 00:15:42.418 "data_size": 63488 00:15:42.418 }, 00:15:42.418 { 00:15:42.418 "name": "pt3", 00:15:42.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.418 "is_configured": true, 00:15:42.418 "data_offset": 2048, 00:15:42.418 "data_size": 63488 00:15:42.418 } 00:15:42.418 ] 00:15:42.418 }' 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.418 01:35:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
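`verify_raid_bdev_state` (@103–@115 above) compares the dumped raid bdev against the expected state, level, strip size, and operational bdev count. The same checks can be sketched in Python over the final `bdev_raid_get_bdevs` entry from the log (JSON trimmed by us to the fields the helper inspects):

```python
import json

# Final bdev_raid_get_bdevs entry for raid_bdev1, trimmed from the log above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

# Equivalent of: verify_raid_bdev_state raid_bdev1 online raid5f 64 3
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid5f"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 3
assert raid_bdev_info["num_base_bdevs_discovered"] == 3
print("raid_bdev1 state verified")
```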
00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.678 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.678 [2024-11-17 01:35:51.130331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.938 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.938 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.939 "name": "raid_bdev1", 00:15:42.939 "aliases": [ 00:15:42.939 "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a" 00:15:42.939 ], 00:15:42.939 "product_name": "Raid Volume", 00:15:42.939 "block_size": 512, 00:15:42.939 "num_blocks": 126976, 00:15:42.939 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:42.939 "assigned_rate_limits": { 00:15:42.939 "rw_ios_per_sec": 0, 00:15:42.939 "rw_mbytes_per_sec": 0, 00:15:42.939 "r_mbytes_per_sec": 0, 00:15:42.939 "w_mbytes_per_sec": 0 00:15:42.939 }, 00:15:42.939 "claimed": false, 00:15:42.939 "zoned": false, 00:15:42.939 "supported_io_types": { 00:15:42.939 "read": true, 00:15:42.939 "write": true, 00:15:42.939 "unmap": false, 00:15:42.939 "flush": false, 00:15:42.939 "reset": true, 00:15:42.939 "nvme_admin": false, 00:15:42.939 "nvme_io": false, 00:15:42.939 "nvme_io_md": false, 00:15:42.939 "write_zeroes": true, 00:15:42.939 "zcopy": false, 00:15:42.939 
"get_zone_info": false, 00:15:42.939 "zone_management": false, 00:15:42.939 "zone_append": false, 00:15:42.939 "compare": false, 00:15:42.939 "compare_and_write": false, 00:15:42.939 "abort": false, 00:15:42.939 "seek_hole": false, 00:15:42.939 "seek_data": false, 00:15:42.939 "copy": false, 00:15:42.939 "nvme_iov_md": false 00:15:42.939 }, 00:15:42.939 "driver_specific": { 00:15:42.939 "raid": { 00:15:42.939 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:42.939 "strip_size_kb": 64, 00:15:42.939 "state": "online", 00:15:42.939 "raid_level": "raid5f", 00:15:42.939 "superblock": true, 00:15:42.939 "num_base_bdevs": 3, 00:15:42.939 "num_base_bdevs_discovered": 3, 00:15:42.939 "num_base_bdevs_operational": 3, 00:15:42.939 "base_bdevs_list": [ 00:15:42.939 { 00:15:42.939 "name": "pt1", 00:15:42.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.939 "is_configured": true, 00:15:42.939 "data_offset": 2048, 00:15:42.939 "data_size": 63488 00:15:42.939 }, 00:15:42.939 { 00:15:42.939 "name": "pt2", 00:15:42.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.939 "is_configured": true, 00:15:42.939 "data_offset": 2048, 00:15:42.939 "data_size": 63488 00:15:42.939 }, 00:15:42.939 { 00:15:42.939 "name": "pt3", 00:15:42.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.939 "is_configured": true, 00:15:42.939 "data_offset": 2048, 00:15:42.939 "data_size": 63488 00:15:42.939 } 00:15:42.939 ] 00:15:42.939 } 00:15:42.939 } 00:15:42.939 }' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:42.939 pt2 00:15:42.939 pt3' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.939 01:35:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.939 [2024-11-17 01:35:51.377887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.939 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a '!=' 63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a ']' 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.200 [2024-11-17 01:35:51.425684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.200 "name": "raid_bdev1", 00:15:43.200 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:43.200 "strip_size_kb": 64, 00:15:43.200 "state": "online", 00:15:43.200 "raid_level": "raid5f", 00:15:43.200 "superblock": true, 00:15:43.200 "num_base_bdevs": 3, 00:15:43.200 "num_base_bdevs_discovered": 2, 00:15:43.200 "num_base_bdevs_operational": 2, 00:15:43.200 "base_bdevs_list": [ 00:15:43.200 { 00:15:43.200 "name": null, 00:15:43.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.200 "is_configured": false, 00:15:43.200 "data_offset": 0, 00:15:43.200 "data_size": 63488 00:15:43.200 }, 00:15:43.200 { 00:15:43.200 "name": "pt2", 00:15:43.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.200 "is_configured": true, 00:15:43.200 "data_offset": 2048, 00:15:43.200 "data_size": 63488 00:15:43.200 }, 00:15:43.200 { 00:15:43.200 "name": "pt3", 00:15:43.200 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.200 "is_configured": true, 00:15:43.200 "data_offset": 2048, 00:15:43.200 "data_size": 63488 00:15:43.200 } 00:15:43.200 ] 00:15:43.200 }' 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.200 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.461 [2024-11-17 01:35:51.860898] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.461 [2024-11-17 01:35:51.860968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.461 [2024-11-17 01:35:51.861092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.461 [2024-11-17 01:35:51.861199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.461 [2024-11-17 01:35:51.861260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.461 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:43.721 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.721 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:43.721 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.722 [2024-11-17 01:35:51.948838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.722 [2024-11-17 01:35:51.948928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.722 [2024-11-17 01:35:51.948948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:43.722 [2024-11-17 01:35:51.948958] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:43.722 [2024-11-17 01:35:51.951060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.722 [2024-11-17 01:35:51.951099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.722 [2024-11-17 01:35:51.951180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:43.722 [2024-11-17 01:35:51.951229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.722 pt2 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.722 01:35:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.722 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.722 "name": "raid_bdev1", 00:15:43.722 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:43.722 "strip_size_kb": 64, 00:15:43.722 "state": "configuring", 00:15:43.722 "raid_level": "raid5f", 00:15:43.722 "superblock": true, 00:15:43.722 "num_base_bdevs": 3, 00:15:43.722 "num_base_bdevs_discovered": 1, 00:15:43.722 "num_base_bdevs_operational": 2, 00:15:43.722 "base_bdevs_list": [ 00:15:43.722 { 00:15:43.722 "name": null, 00:15:43.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.722 "is_configured": false, 00:15:43.722 "data_offset": 2048, 00:15:43.722 "data_size": 63488 00:15:43.722 }, 00:15:43.722 { 00:15:43.722 "name": "pt2", 00:15:43.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.722 "is_configured": true, 00:15:43.722 "data_offset": 2048, 00:15:43.722 "data_size": 63488 00:15:43.722 }, 00:15:43.722 { 00:15:43.722 "name": null, 00:15:43.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.722 "is_configured": false, 00:15:43.722 "data_offset": 2048, 00:15:43.722 "data_size": 63488 00:15:43.722 } 00:15:43.722 ] 00:15:43.722 }' 00:15:43.722 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.722 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.982 [2024-11-17 01:35:52.400013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.982 [2024-11-17 01:35:52.400135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.982 [2024-11-17 01:35:52.400178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:43.982 [2024-11-17 01:35:52.400217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.982 [2024-11-17 01:35:52.400692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.982 [2024-11-17 01:35:52.400773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.982 [2024-11-17 01:35:52.400892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:43.982 [2024-11-17 01:35:52.400963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.982 [2024-11-17 01:35:52.401127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:43.982 [2024-11-17 01:35:52.401176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:43.982 [2024-11-17 01:35:52.401452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:43.982 [2024-11-17 01:35:52.406629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:43.982 [2024-11-17 01:35:52.406686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:43.982 [2024-11-17 01:35:52.407030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.982 pt3 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.982 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.243 01:35:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.243 "name": "raid_bdev1", 00:15:44.243 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:44.243 "strip_size_kb": 64, 00:15:44.243 "state": "online", 00:15:44.243 "raid_level": "raid5f", 00:15:44.243 "superblock": true, 00:15:44.243 "num_base_bdevs": 3, 00:15:44.243 "num_base_bdevs_discovered": 2, 00:15:44.243 "num_base_bdevs_operational": 2, 00:15:44.243 "base_bdevs_list": [ 00:15:44.243 { 00:15:44.243 "name": null, 00:15:44.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.243 "is_configured": false, 00:15:44.243 "data_offset": 2048, 00:15:44.243 "data_size": 63488 00:15:44.243 }, 00:15:44.243 { 00:15:44.243 "name": "pt2", 00:15:44.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.243 "is_configured": true, 00:15:44.243 "data_offset": 2048, 00:15:44.243 "data_size": 63488 00:15:44.243 }, 00:15:44.243 { 00:15:44.243 "name": "pt3", 00:15:44.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.243 "is_configured": true, 00:15:44.243 "data_offset": 2048, 00:15:44.243 "data_size": 63488 00:15:44.243 } 00:15:44.243 ] 00:15:44.243 }' 00:15:44.243 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.243 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.503 [2024-11-17 01:35:52.828752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.503 [2024-11-17 01:35:52.828841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.503 [2024-11-17 01:35:52.828961] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.503 [2024-11-17 01:35:52.829059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.503 [2024-11-17 01:35:52.829118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.503 [2024-11-17 01:35:52.900672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:44.503 [2024-11-17 01:35:52.900776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.503 [2024-11-17 01:35:52.900815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:44.503 [2024-11-17 01:35:52.900851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.503 [2024-11-17 01:35:52.903085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.503 [2024-11-17 01:35:52.903181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:44.503 [2024-11-17 01:35:52.903297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:44.503 [2024-11-17 01:35:52.903378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:44.503 [2024-11-17 01:35:52.903552] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:44.503 [2024-11-17 01:35:52.903622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.503 [2024-11-17 01:35:52.903666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:44.503 [2024-11-17 01:35:52.903785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:44.503 pt1 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:44.503 01:35:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.503 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.763 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.763 "name": "raid_bdev1", 00:15:44.763 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:44.763 "strip_size_kb": 64, 00:15:44.763 "state": "configuring", 00:15:44.763 "raid_level": "raid5f", 00:15:44.763 
"superblock": true, 00:15:44.763 "num_base_bdevs": 3, 00:15:44.763 "num_base_bdevs_discovered": 1, 00:15:44.763 "num_base_bdevs_operational": 2, 00:15:44.763 "base_bdevs_list": [ 00:15:44.763 { 00:15:44.763 "name": null, 00:15:44.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.763 "is_configured": false, 00:15:44.763 "data_offset": 2048, 00:15:44.763 "data_size": 63488 00:15:44.763 }, 00:15:44.763 { 00:15:44.763 "name": "pt2", 00:15:44.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.763 "is_configured": true, 00:15:44.763 "data_offset": 2048, 00:15:44.763 "data_size": 63488 00:15:44.763 }, 00:15:44.763 { 00:15:44.763 "name": null, 00:15:44.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.763 "is_configured": false, 00:15:44.763 "data_offset": 2048, 00:15:44.763 "data_size": 63488 00:15:44.763 } 00:15:44.763 ] 00:15:44.763 }' 00:15:44.763 01:35:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.763 01:35:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.023 [2024-11-17 01:35:53.403792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.023 [2024-11-17 01:35:53.403898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.023 [2024-11-17 01:35:53.403941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:45.023 [2024-11-17 01:35:53.403979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.023 [2024-11-17 01:35:53.404480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.023 [2024-11-17 01:35:53.404545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.023 [2024-11-17 01:35:53.404675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:45.023 [2024-11-17 01:35:53.404705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.023 [2024-11-17 01:35:53.404854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:45.023 [2024-11-17 01:35:53.404863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:45.023 [2024-11-17 01:35:53.405101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:45.023 pt3 00:15:45.023 [2024-11-17 01:35:53.410682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:45.023 [2024-11-17 01:35:53.410709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:45.023 [2024-11-17 01:35:53.410965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:45.023 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.024 "name": "raid_bdev1", 00:15:45.024 "uuid": "63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a", 00:15:45.024 "strip_size_kb": 64, 00:15:45.024 "state": "online", 00:15:45.024 "raid_level": 
"raid5f", 00:15:45.024 "superblock": true, 00:15:45.024 "num_base_bdevs": 3, 00:15:45.024 "num_base_bdevs_discovered": 2, 00:15:45.024 "num_base_bdevs_operational": 2, 00:15:45.024 "base_bdevs_list": [ 00:15:45.024 { 00:15:45.024 "name": null, 00:15:45.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.024 "is_configured": false, 00:15:45.024 "data_offset": 2048, 00:15:45.024 "data_size": 63488 00:15:45.024 }, 00:15:45.024 { 00:15:45.024 "name": "pt2", 00:15:45.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.024 "is_configured": true, 00:15:45.024 "data_offset": 2048, 00:15:45.024 "data_size": 63488 00:15:45.024 }, 00:15:45.024 { 00:15:45.024 "name": "pt3", 00:15:45.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.024 "is_configured": true, 00:15:45.024 "data_offset": 2048, 00:15:45.024 "data_size": 63488 00:15:45.024 } 00:15:45.024 ] 00:15:45.024 }' 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.024 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.592 [2024-11-17 01:35:53.920747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a '!=' 63cec5b6-4dbb-4ab4-bb7e-0b6c76a5134a ']' 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80874 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80874 ']' 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80874 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80874 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.592 killing process with pid 80874 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80874' 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80874 00:15:45.592 [2024-11-17 01:35:53.978437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.592 [2024-11-17 01:35:53.978520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:45.592 [2024-11-17 01:35:53.978574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.592 [2024-11-17 01:35:53.978586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:45.592 01:35:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80874 00:15:45.852 [2024-11-17 01:35:54.267092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.233 01:35:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:47.233 00:15:47.233 real 0m7.648s 00:15:47.233 user 0m11.942s 00:15:47.233 sys 0m1.439s 00:15:47.233 ************************************ 00:15:47.233 END TEST raid5f_superblock_test 00:15:47.233 ************************************ 00:15:47.233 01:35:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.233 01:35:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.233 01:35:55 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:47.233 01:35:55 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:47.233 01:35:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:47.233 01:35:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.233 01:35:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.233 ************************************ 00:15:47.233 START TEST raid5f_rebuild_test 00:15:47.233 ************************************ 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:47.233 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:47.234 01:35:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81308 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81308 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81308 ']' 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.234 01:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.234 [2024-11-17 01:35:55.504060] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:15:47.234 [2024-11-17 01:35:55.504241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:47.234 Zero copy mechanism will not be used. 00:15:47.234 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81308 ] 00:15:47.234 [2024-11-17 01:35:55.683280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.494 [2024-11-17 01:35:55.789602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.753 [2024-11-17 01:35:55.973301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.753 [2024-11-17 01:35:55.973438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.122 BaseBdev1_malloc 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.122 
01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.122 [2024-11-17 01:35:56.358000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.122 [2024-11-17 01:35:56.358127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.122 [2024-11-17 01:35:56.358172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:48.122 [2024-11-17 01:35:56.358185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.122 [2024-11-17 01:35:56.360263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.122 [2024-11-17 01:35:56.360307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.122 BaseBdev1 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.122 BaseBdev2_malloc 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.122 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.122 [2024-11-17 01:35:56.411985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:48.122 [2024-11-17 01:35:56.412100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.122 [2024-11-17 01:35:56.412142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.122 [2024-11-17 01:35:56.412185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.122 [2024-11-17 01:35:56.414331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.122 [2024-11-17 01:35:56.414423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.122 BaseBdev2 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.123 BaseBdev3_malloc 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.123 [2024-11-17 01:35:56.478103] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:48.123 [2024-11-17 01:35:56.478221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.123 [2024-11-17 01:35:56.478261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.123 [2024-11-17 01:35:56.478300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.123 [2024-11-17 01:35:56.480391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.123 [2024-11-17 01:35:56.480470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:48.123 BaseBdev3 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.123 spare_malloc 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.123 spare_delay 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.123 [2024-11-17 01:35:56.544586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.123 [2024-11-17 01:35:56.544702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.123 [2024-11-17 01:35:56.544739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:48.123 [2024-11-17 01:35:56.544793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.123 [2024-11-17 01:35:56.547021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.123 [2024-11-17 01:35:56.547114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.123 spare 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.123 [2024-11-17 01:35:56.556627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.123 [2024-11-17 01:35:56.558396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.123 [2024-11-17 01:35:56.558506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.123 [2024-11-17 01:35:56.558638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:48.123 [2024-11-17 01:35:56.558685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:48.123 [2024-11-17 
01:35:56.558975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:48.123 [2024-11-17 01:35:56.564850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:48.123 [2024-11-17 01:35:56.564910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:48.123 [2024-11-17 01:35:56.565141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.123 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.420 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.420 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.420 "name": "raid_bdev1", 00:15:48.420 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:48.420 "strip_size_kb": 64, 00:15:48.420 "state": "online", 00:15:48.420 "raid_level": "raid5f", 00:15:48.420 "superblock": false, 00:15:48.420 "num_base_bdevs": 3, 00:15:48.420 "num_base_bdevs_discovered": 3, 00:15:48.420 "num_base_bdevs_operational": 3, 00:15:48.420 "base_bdevs_list": [ 00:15:48.420 { 00:15:48.420 "name": "BaseBdev1", 00:15:48.420 "uuid": "d7f823ca-7c6f-58fb-9ed2-30aedd661ea7", 00:15:48.420 "is_configured": true, 00:15:48.420 "data_offset": 0, 00:15:48.420 "data_size": 65536 00:15:48.420 }, 00:15:48.420 { 00:15:48.420 "name": "BaseBdev2", 00:15:48.420 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:48.420 "is_configured": true, 00:15:48.420 "data_offset": 0, 00:15:48.420 "data_size": 65536 00:15:48.420 }, 00:15:48.420 { 00:15:48.420 "name": "BaseBdev3", 00:15:48.420 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:48.420 "is_configured": true, 00:15:48.420 "data_offset": 0, 00:15:48.420 "data_size": 65536 00:15:48.420 } 00:15:48.420 ] 00:15:48.420 }' 00:15:48.420 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.420 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.680 01:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.680 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.680 01:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.680 01:35:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:48.680 [2024-11-17 01:35:57.002682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:48.680 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:48.940 [2024-11-17 01:35:57.254111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:48.940 /dev/nbd0 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.940 1+0 records in 00:15:48.940 1+0 records out 00:15:48.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192932 s, 
21.2 MB/s 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:48.940 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:48.941 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:49.510 512+0 records in 00:15:49.510 512+0 records out 00:15:49.510 67108864 bytes (67 MB, 64 MiB) copied, 0.509215 s, 132 MB/s 00:15:49.510 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:49.510 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.510 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:49.510 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.510 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:49.510 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:15:49.510 01:35:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.770 [2024-11-17 01:35:58.042996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.770 [2024-11-17 01:35:58.053975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.770 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.770 "name": "raid_bdev1", 00:15:49.770 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:49.771 "strip_size_kb": 64, 00:15:49.771 "state": "online", 00:15:49.771 "raid_level": "raid5f", 00:15:49.771 "superblock": false, 00:15:49.771 "num_base_bdevs": 3, 00:15:49.771 "num_base_bdevs_discovered": 2, 00:15:49.771 "num_base_bdevs_operational": 2, 00:15:49.771 "base_bdevs_list": [ 00:15:49.771 { 00:15:49.771 "name": null, 00:15:49.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.771 "is_configured": false, 00:15:49.771 "data_offset": 0, 00:15:49.771 "data_size": 65536 00:15:49.771 }, 
00:15:49.771 { 00:15:49.771 "name": "BaseBdev2", 00:15:49.771 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:49.771 "is_configured": true, 00:15:49.771 "data_offset": 0, 00:15:49.771 "data_size": 65536 00:15:49.771 }, 00:15:49.771 { 00:15:49.771 "name": "BaseBdev3", 00:15:49.771 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:49.771 "is_configured": true, 00:15:49.771 "data_offset": 0, 00:15:49.771 "data_size": 65536 00:15:49.771 } 00:15:49.771 ] 00:15:49.771 }' 00:15:49.771 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.771 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.341 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:50.341 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.341 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.341 [2024-11-17 01:35:58.541139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.341 [2024-11-17 01:35:58.558120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:50.341 01:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.341 01:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:50.341 [2024-11-17 01:35:58.566509] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.278 "name": "raid_bdev1", 00:15:51.278 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:51.278 "strip_size_kb": 64, 00:15:51.278 "state": "online", 00:15:51.278 "raid_level": "raid5f", 00:15:51.278 "superblock": false, 00:15:51.278 "num_base_bdevs": 3, 00:15:51.278 "num_base_bdevs_discovered": 3, 00:15:51.278 "num_base_bdevs_operational": 3, 00:15:51.278 "process": { 00:15:51.278 "type": "rebuild", 00:15:51.278 "target": "spare", 00:15:51.278 "progress": { 00:15:51.278 "blocks": 20480, 00:15:51.278 "percent": 15 00:15:51.278 } 00:15:51.278 }, 00:15:51.278 "base_bdevs_list": [ 00:15:51.278 { 00:15:51.278 "name": "spare", 00:15:51.278 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:51.278 "is_configured": true, 00:15:51.278 "data_offset": 0, 00:15:51.278 "data_size": 65536 00:15:51.278 }, 00:15:51.278 { 00:15:51.278 "name": "BaseBdev2", 00:15:51.278 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:51.278 "is_configured": true, 00:15:51.278 "data_offset": 0, 00:15:51.278 "data_size": 65536 00:15:51.278 }, 00:15:51.278 { 00:15:51.278 "name": "BaseBdev3", 00:15:51.278 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:51.278 "is_configured": true, 00:15:51.278 
"data_offset": 0, 00:15:51.278 "data_size": 65536 00:15:51.278 } 00:15:51.278 ] 00:15:51.278 }' 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.278 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.278 [2024-11-17 01:35:59.717089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.538 [2024-11-17 01:35:59.775564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.538 [2024-11-17 01:35:59.775670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.538 [2024-11-17 01:35:59.775711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.538 [2024-11-17 01:35:59.775733] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.538 01:35:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.538 "name": "raid_bdev1", 00:15:51.538 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:51.538 "strip_size_kb": 64, 00:15:51.538 "state": "online", 00:15:51.538 "raid_level": "raid5f", 00:15:51.538 "superblock": false, 00:15:51.538 "num_base_bdevs": 3, 00:15:51.538 "num_base_bdevs_discovered": 2, 00:15:51.538 "num_base_bdevs_operational": 2, 00:15:51.538 "base_bdevs_list": [ 00:15:51.538 { 00:15:51.538 "name": null, 00:15:51.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.538 "is_configured": false, 00:15:51.538 "data_offset": 0, 00:15:51.538 "data_size": 65536 00:15:51.538 }, 00:15:51.538 { 00:15:51.538 
"name": "BaseBdev2", 00:15:51.538 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:51.538 "is_configured": true, 00:15:51.538 "data_offset": 0, 00:15:51.538 "data_size": 65536 00:15:51.538 }, 00:15:51.538 { 00:15:51.538 "name": "BaseBdev3", 00:15:51.538 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:51.538 "is_configured": true, 00:15:51.538 "data_offset": 0, 00:15:51.538 "data_size": 65536 00:15:51.538 } 00:15:51.538 ] 00:15:51.538 }' 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.538 01:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.108 "name": "raid_bdev1", 00:15:52.108 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:52.108 "strip_size_kb": 64, 00:15:52.108 "state": 
"online", 00:15:52.108 "raid_level": "raid5f", 00:15:52.108 "superblock": false, 00:15:52.108 "num_base_bdevs": 3, 00:15:52.108 "num_base_bdevs_discovered": 2, 00:15:52.108 "num_base_bdevs_operational": 2, 00:15:52.108 "base_bdevs_list": [ 00:15:52.108 { 00:15:52.108 "name": null, 00:15:52.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.108 "is_configured": false, 00:15:52.108 "data_offset": 0, 00:15:52.108 "data_size": 65536 00:15:52.108 }, 00:15:52.108 { 00:15:52.108 "name": "BaseBdev2", 00:15:52.108 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:52.108 "is_configured": true, 00:15:52.108 "data_offset": 0, 00:15:52.108 "data_size": 65536 00:15:52.108 }, 00:15:52.108 { 00:15:52.108 "name": "BaseBdev3", 00:15:52.108 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:52.108 "is_configured": true, 00:15:52.108 "data_offset": 0, 00:15:52.108 "data_size": 65536 00:15:52.108 } 00:15:52.108 ] 00:15:52.108 }' 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 [2024-11-17 01:36:00.459750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.108 [2024-11-17 01:36:00.475355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:52.108 01:36:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.108 01:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:52.108 [2024-11-17 01:36:00.482930] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.046 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.305 01:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.305 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.305 "name": "raid_bdev1", 00:15:53.305 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:53.306 "strip_size_kb": 64, 00:15:53.306 "state": "online", 00:15:53.306 "raid_level": "raid5f", 00:15:53.306 "superblock": false, 00:15:53.306 "num_base_bdevs": 3, 00:15:53.306 "num_base_bdevs_discovered": 3, 00:15:53.306 "num_base_bdevs_operational": 3, 00:15:53.306 "process": { 00:15:53.306 "type": "rebuild", 00:15:53.306 "target": "spare", 00:15:53.306 "progress": { 
00:15:53.306 "blocks": 20480, 00:15:53.306 "percent": 15 00:15:53.306 } 00:15:53.306 }, 00:15:53.306 "base_bdevs_list": [ 00:15:53.306 { 00:15:53.306 "name": "spare", 00:15:53.306 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:53.306 "is_configured": true, 00:15:53.306 "data_offset": 0, 00:15:53.306 "data_size": 65536 00:15:53.306 }, 00:15:53.306 { 00:15:53.306 "name": "BaseBdev2", 00:15:53.306 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:53.306 "is_configured": true, 00:15:53.306 "data_offset": 0, 00:15:53.306 "data_size": 65536 00:15:53.306 }, 00:15:53.306 { 00:15:53.306 "name": "BaseBdev3", 00:15:53.306 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:53.306 "is_configured": true, 00:15:53.306 "data_offset": 0, 00:15:53.306 "data_size": 65536 00:15:53.306 } 00:15:53.306 ] 00:15:53.306 }' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=535 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.306 "name": "raid_bdev1", 00:15:53.306 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:53.306 "strip_size_kb": 64, 00:15:53.306 "state": "online", 00:15:53.306 "raid_level": "raid5f", 00:15:53.306 "superblock": false, 00:15:53.306 "num_base_bdevs": 3, 00:15:53.306 "num_base_bdevs_discovered": 3, 00:15:53.306 "num_base_bdevs_operational": 3, 00:15:53.306 "process": { 00:15:53.306 "type": "rebuild", 00:15:53.306 "target": "spare", 00:15:53.306 "progress": { 00:15:53.306 "blocks": 22528, 00:15:53.306 "percent": 17 00:15:53.306 } 00:15:53.306 }, 00:15:53.306 "base_bdevs_list": [ 00:15:53.306 { 00:15:53.306 "name": "spare", 00:15:53.306 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:53.306 "is_configured": true, 00:15:53.306 "data_offset": 0, 00:15:53.306 "data_size": 65536 00:15:53.306 }, 00:15:53.306 { 00:15:53.306 "name": "BaseBdev2", 00:15:53.306 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:53.306 "is_configured": true, 00:15:53.306 
"data_offset": 0, 00:15:53.306 "data_size": 65536 00:15:53.306 }, 00:15:53.306 { 00:15:53.306 "name": "BaseBdev3", 00:15:53.306 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:53.306 "is_configured": true, 00:15:53.306 "data_offset": 0, 00:15:53.306 "data_size": 65536 00:15:53.306 } 00:15:53.306 ] 00:15:53.306 }' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.306 01:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.686 01:36:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.686 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.686 "name": "raid_bdev1", 00:15:54.686 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:54.686 "strip_size_kb": 64, 00:15:54.686 "state": "online", 00:15:54.686 "raid_level": "raid5f", 00:15:54.686 "superblock": false, 00:15:54.686 "num_base_bdevs": 3, 00:15:54.686 "num_base_bdevs_discovered": 3, 00:15:54.686 "num_base_bdevs_operational": 3, 00:15:54.686 "process": { 00:15:54.686 "type": "rebuild", 00:15:54.686 "target": "spare", 00:15:54.686 "progress": { 00:15:54.686 "blocks": 45056, 00:15:54.686 "percent": 34 00:15:54.686 } 00:15:54.686 }, 00:15:54.687 "base_bdevs_list": [ 00:15:54.687 { 00:15:54.687 "name": "spare", 00:15:54.687 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:54.687 "is_configured": true, 00:15:54.687 "data_offset": 0, 00:15:54.687 "data_size": 65536 00:15:54.687 }, 00:15:54.687 { 00:15:54.687 "name": "BaseBdev2", 00:15:54.687 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:54.687 "is_configured": true, 00:15:54.687 "data_offset": 0, 00:15:54.687 "data_size": 65536 00:15:54.687 }, 00:15:54.687 { 00:15:54.687 "name": "BaseBdev3", 00:15:54.687 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:54.687 "is_configured": true, 00:15:54.687 "data_offset": 0, 00:15:54.687 "data_size": 65536 00:15:54.687 } 00:15:54.687 ] 00:15:54.687 }' 00:15:54.687 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.687 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.687 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.687 01:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.687 01:36:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.626 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.626 "name": "raid_bdev1", 00:15:55.626 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:55.626 "strip_size_kb": 64, 00:15:55.626 "state": "online", 00:15:55.626 "raid_level": "raid5f", 00:15:55.626 "superblock": false, 00:15:55.626 "num_base_bdevs": 3, 00:15:55.626 "num_base_bdevs_discovered": 3, 00:15:55.626 "num_base_bdevs_operational": 3, 00:15:55.626 "process": { 00:15:55.626 "type": "rebuild", 00:15:55.626 "target": "spare", 00:15:55.626 "progress": { 00:15:55.626 "blocks": 69632, 00:15:55.626 "percent": 53 00:15:55.626 } 00:15:55.626 }, 00:15:55.626 "base_bdevs_list": [ 00:15:55.626 { 00:15:55.626 "name": "spare", 00:15:55.626 
"uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:55.627 "is_configured": true, 00:15:55.627 "data_offset": 0, 00:15:55.627 "data_size": 65536 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "name": "BaseBdev2", 00:15:55.627 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:55.627 "is_configured": true, 00:15:55.627 "data_offset": 0, 00:15:55.627 "data_size": 65536 00:15:55.627 }, 00:15:55.627 { 00:15:55.627 "name": "BaseBdev3", 00:15:55.627 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:55.627 "is_configured": true, 00:15:55.627 "data_offset": 0, 00:15:55.627 "data_size": 65536 00:15:55.627 } 00:15:55.627 ] 00:15:55.627 }' 00:15:55.627 01:36:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.627 01:36:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.627 01:36:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.627 01:36:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.627 01:36:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.012 01:36:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.012 "name": "raid_bdev1", 00:15:57.012 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:57.012 "strip_size_kb": 64, 00:15:57.012 "state": "online", 00:15:57.012 "raid_level": "raid5f", 00:15:57.012 "superblock": false, 00:15:57.012 "num_base_bdevs": 3, 00:15:57.012 "num_base_bdevs_discovered": 3, 00:15:57.012 "num_base_bdevs_operational": 3, 00:15:57.012 "process": { 00:15:57.012 "type": "rebuild", 00:15:57.012 "target": "spare", 00:15:57.012 "progress": { 00:15:57.012 "blocks": 92160, 00:15:57.012 "percent": 70 00:15:57.012 } 00:15:57.012 }, 00:15:57.012 "base_bdevs_list": [ 00:15:57.012 { 00:15:57.012 "name": "spare", 00:15:57.012 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:57.012 "is_configured": true, 00:15:57.012 "data_offset": 0, 00:15:57.012 "data_size": 65536 00:15:57.012 }, 00:15:57.012 { 00:15:57.012 "name": "BaseBdev2", 00:15:57.012 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:57.012 "is_configured": true, 00:15:57.012 "data_offset": 0, 00:15:57.012 "data_size": 65536 00:15:57.012 }, 00:15:57.012 { 00:15:57.012 "name": "BaseBdev3", 00:15:57.012 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:57.012 "is_configured": true, 00:15:57.012 "data_offset": 0, 00:15:57.012 "data_size": 65536 00:15:57.012 } 00:15:57.012 ] 00:15:57.012 }' 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.012 01:36:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.951 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.951 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.951 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.951 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.952 "name": "raid_bdev1", 00:15:57.952 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:57.952 "strip_size_kb": 64, 00:15:57.952 "state": "online", 00:15:57.952 "raid_level": "raid5f", 00:15:57.952 "superblock": false, 00:15:57.952 "num_base_bdevs": 3, 00:15:57.952 "num_base_bdevs_discovered": 3, 00:15:57.952 
"num_base_bdevs_operational": 3, 00:15:57.952 "process": { 00:15:57.952 "type": "rebuild", 00:15:57.952 "target": "spare", 00:15:57.952 "progress": { 00:15:57.952 "blocks": 116736, 00:15:57.952 "percent": 89 00:15:57.952 } 00:15:57.952 }, 00:15:57.952 "base_bdevs_list": [ 00:15:57.952 { 00:15:57.952 "name": "spare", 00:15:57.952 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:57.952 "is_configured": true, 00:15:57.952 "data_offset": 0, 00:15:57.952 "data_size": 65536 00:15:57.952 }, 00:15:57.952 { 00:15:57.952 "name": "BaseBdev2", 00:15:57.952 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:57.952 "is_configured": true, 00:15:57.952 "data_offset": 0, 00:15:57.952 "data_size": 65536 00:15:57.952 }, 00:15:57.952 { 00:15:57.952 "name": "BaseBdev3", 00:15:57.952 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:57.952 "is_configured": true, 00:15:57.952 "data_offset": 0, 00:15:57.952 "data_size": 65536 00:15:57.952 } 00:15:57.952 ] 00:15:57.952 }' 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.952 01:36:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.521 [2024-11-17 01:36:06.926333] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:58.521 [2024-11-17 01:36:06.926414] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:58.521 [2024-11-17 01:36:06.926457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.091 "name": "raid_bdev1", 00:15:59.091 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:59.091 "strip_size_kb": 64, 00:15:59.091 "state": "online", 00:15:59.091 "raid_level": "raid5f", 00:15:59.091 "superblock": false, 00:15:59.091 "num_base_bdevs": 3, 00:15:59.091 "num_base_bdevs_discovered": 3, 00:15:59.091 "num_base_bdevs_operational": 3, 00:15:59.091 "base_bdevs_list": [ 00:15:59.091 { 00:15:59.091 "name": "spare", 00:15:59.091 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:59.091 "is_configured": true, 00:15:59.091 "data_offset": 0, 00:15:59.091 "data_size": 65536 00:15:59.091 }, 00:15:59.091 { 00:15:59.091 "name": "BaseBdev2", 00:15:59.091 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:59.091 "is_configured": true, 00:15:59.091 
"data_offset": 0, 00:15:59.091 "data_size": 65536 00:15:59.091 }, 00:15:59.091 { 00:15:59.091 "name": "BaseBdev3", 00:15:59.091 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:59.091 "is_configured": true, 00:15:59.091 "data_offset": 0, 00:15:59.091 "data_size": 65536 00:15:59.091 } 00:15:59.091 ] 00:15:59.091 }' 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.091 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.351 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.351 01:36:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.351 "name": "raid_bdev1", 00:15:59.351 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:59.351 "strip_size_kb": 64, 00:15:59.351 "state": "online", 00:15:59.351 "raid_level": "raid5f", 00:15:59.351 "superblock": false, 00:15:59.351 "num_base_bdevs": 3, 00:15:59.351 "num_base_bdevs_discovered": 3, 00:15:59.352 "num_base_bdevs_operational": 3, 00:15:59.352 "base_bdevs_list": [ 00:15:59.352 { 00:15:59.352 "name": "spare", 00:15:59.352 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:59.352 "is_configured": true, 00:15:59.352 "data_offset": 0, 00:15:59.352 "data_size": 65536 00:15:59.352 }, 00:15:59.352 { 00:15:59.352 "name": "BaseBdev2", 00:15:59.352 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:59.352 "is_configured": true, 00:15:59.352 "data_offset": 0, 00:15:59.352 "data_size": 65536 00:15:59.352 }, 00:15:59.352 { 00:15:59.352 "name": "BaseBdev3", 00:15:59.352 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:59.352 "is_configured": true, 00:15:59.352 "data_offset": 0, 00:15:59.352 "data_size": 65536 00:15:59.352 } 00:15:59.352 ] 00:15:59.352 }' 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.352 01:36:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.352 "name": "raid_bdev1", 00:15:59.352 "uuid": "fe93fe76-06d9-4d1d-94ce-ed9f24251178", 00:15:59.352 "strip_size_kb": 64, 00:15:59.352 "state": "online", 00:15:59.352 "raid_level": "raid5f", 00:15:59.352 "superblock": false, 00:15:59.352 "num_base_bdevs": 3, 00:15:59.352 "num_base_bdevs_discovered": 3, 00:15:59.352 "num_base_bdevs_operational": 3, 00:15:59.352 "base_bdevs_list": [ 00:15:59.352 { 00:15:59.352 "name": "spare", 00:15:59.352 "uuid": "308abbbf-04f4-5744-b4f4-9d71987cc50a", 00:15:59.352 "is_configured": true, 00:15:59.352 "data_offset": 0, 00:15:59.352 "data_size": 65536 00:15:59.352 }, 00:15:59.352 { 00:15:59.352 
"name": "BaseBdev2", 00:15:59.352 "uuid": "9880ee9d-36f5-5c5c-beeb-60ff0bc28b55", 00:15:59.352 "is_configured": true, 00:15:59.352 "data_offset": 0, 00:15:59.352 "data_size": 65536 00:15:59.352 }, 00:15:59.352 { 00:15:59.352 "name": "BaseBdev3", 00:15:59.352 "uuid": "1594ad51-11ba-5b8b-883d-c944201d512b", 00:15:59.352 "is_configured": true, 00:15:59.352 "data_offset": 0, 00:15:59.352 "data_size": 65536 00:15:59.352 } 00:15:59.352 ] 00:15:59.352 }' 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.352 01:36:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.923 [2024-11-17 01:36:08.140639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.923 [2024-11-17 01:36:08.140723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.923 [2024-11-17 01:36:08.140851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.923 [2024-11-17 01:36:08.140973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.923 [2024-11-17 01:36:08.141057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:59.923 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:59.923 /dev/nbd0 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.184 1+0 records in 00:16:00.184 1+0 records out 00:16:00.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044758 s, 9.2 MB/s 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.184 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:00.184 /dev/nbd1 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.445 1+0 records in 00:16:00.445 1+0 records out 00:16:00.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047382 s, 8.6 MB/s 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.445 01:36:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.445 01:36:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.705 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81308 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81308 ']' 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81308 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81308 00:16:00.964 killing process with pid 81308 00:16:00.964 Received shutdown signal, test time was about 60.000000 seconds 00:16:00.964 00:16:00.964 Latency(us) 00:16:00.964 
[2024-11-17T01:36:09.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.964 [2024-11-17T01:36:09.424Z] =================================================================================================================== 00:16:00.964 [2024-11-17T01:36:09.424Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81308' 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81308 00:16:00.964 [2024-11-17 01:36:09.321825] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.964 01:36:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81308 00:16:01.534 [2024-11-17 01:36:09.730755] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.473 01:36:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:02.473 00:16:02.473 real 0m15.467s 00:16:02.473 user 0m18.892s 00:16:02.473 sys 0m2.269s 00:16:02.473 01:36:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.473 ************************************ 00:16:02.473 END TEST raid5f_rebuild_test 00:16:02.473 ************************************ 00:16:02.473 01:36:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.473 01:36:10 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:02.473 01:36:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:02.473 01:36:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.473 01:36:10 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.734 ************************************ 00:16:02.734 START TEST raid5f_rebuild_test_sb 00:16:02.734 ************************************ 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81748 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81748 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81748 ']' 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.734 01:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.734 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:02.734 Zero copy mechanism will not be used. 00:16:02.734 [2024-11-17 01:36:11.047235] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:02.734 [2024-11-17 01:36:11.047345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81748 ] 00:16:02.994 [2024-11-17 01:36:11.222719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.994 [2024-11-17 01:36:11.356348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.254 [2024-11-17 01:36:11.589565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.254 [2024-11-17 01:36:11.589632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.513 BaseBdev1_malloc 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.513 [2024-11-17 01:36:11.913135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.513 [2024-11-17 01:36:11.913284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.513 [2024-11-17 01:36:11.913328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:03.513 [2024-11-17 01:36:11.913360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.513 [2024-11-17 01:36:11.915684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.513 [2024-11-17 01:36:11.915774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.513 BaseBdev1 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:03.513 01:36:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.513 BaseBdev2_malloc 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.513 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.773 [2024-11-17 01:36:11.973878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:03.773 [2024-11-17 01:36:11.973937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.773 [2024-11-17 01:36:11.973955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:03.773 [2024-11-17 01:36:11.973969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.773 [2024-11-17 01:36:11.976287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.773 [2024-11-17 01:36:11.976325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.773 BaseBdev2 00:16:03.773 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.773 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.773 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:03.773 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.773 01:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:03.773 BaseBdev3_malloc 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.773 [2024-11-17 01:36:12.070608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:03.773 [2024-11-17 01:36:12.070726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.773 [2024-11-17 01:36:12.070773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:03.773 [2024-11-17 01:36:12.070805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.773 [2024-11-17 01:36:12.073052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.773 [2024-11-17 01:36:12.073127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:03.773 BaseBdev3 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.773 spare_malloc 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.773 spare_delay 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.773 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.773 [2024-11-17 01:36:12.143015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.774 [2024-11-17 01:36:12.143113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.774 [2024-11-17 01:36:12.143152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:03.774 [2024-11-17 01:36:12.143179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.774 [2024-11-17 01:36:12.145435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.774 [2024-11-17 01:36:12.145510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.774 spare 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.774 [2024-11-17 01:36:12.155069] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.774 [2024-11-17 01:36:12.157167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.774 [2024-11-17 01:36:12.157276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.774 [2024-11-17 01:36:12.157498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:03.774 [2024-11-17 01:36:12.157534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:03.774 [2024-11-17 01:36:12.157808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:03.774 [2024-11-17 01:36:12.163457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:03.774 [2024-11-17 01:36:12.163512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:03.774 [2024-11-17 01:36:12.163734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.774 "name": "raid_bdev1", 00:16:03.774 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:03.774 "strip_size_kb": 64, 00:16:03.774 "state": "online", 00:16:03.774 "raid_level": "raid5f", 00:16:03.774 "superblock": true, 00:16:03.774 "num_base_bdevs": 3, 00:16:03.774 "num_base_bdevs_discovered": 3, 00:16:03.774 "num_base_bdevs_operational": 3, 00:16:03.774 "base_bdevs_list": [ 00:16:03.774 { 00:16:03.774 "name": "BaseBdev1", 00:16:03.774 "uuid": "00539ea4-92c5-559c-9cfe-55492c498e98", 00:16:03.774 "is_configured": true, 00:16:03.774 "data_offset": 2048, 00:16:03.774 "data_size": 63488 00:16:03.774 }, 00:16:03.774 { 00:16:03.774 "name": "BaseBdev2", 00:16:03.774 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:03.774 "is_configured": true, 00:16:03.774 "data_offset": 2048, 00:16:03.774 "data_size": 63488 00:16:03.774 }, 00:16:03.774 { 00:16:03.774 "name": "BaseBdev3", 00:16:03.774 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:03.774 "is_configured": true, 
00:16:03.774 "data_offset": 2048, 00:16:03.774 "data_size": 63488 00:16:03.774 } 00:16:03.774 ] 00:16:03.774 }' 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.774 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.343 [2024-11-17 01:36:12.637839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:04.343 01:36:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.343 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:04.344 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.344 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.344 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:04.604 [2024-11-17 01:36:12.889245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:04.604 /dev/nbd0 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.604 1+0 records in 00:16:04.604 1+0 records out 00:16:04.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399505 s, 10.3 MB/s 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:04.604 01:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:05.174 496+0 records in 00:16:05.174 496+0 records out 00:16:05.174 65011712 bytes (65 MB, 62 MiB) copied, 0.373136 s, 174 MB/s 00:16:05.174 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.174 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.174 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.174 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.174 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.175 [2024-11-17 01:36:13.536668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.175 [2024-11-17 01:36:13.556189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.175 01:36:13 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.175 "name": "raid_bdev1", 00:16:05.175 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:05.175 "strip_size_kb": 64, 00:16:05.175 "state": "online", 00:16:05.175 "raid_level": "raid5f", 00:16:05.175 "superblock": true, 00:16:05.175 "num_base_bdevs": 3, 00:16:05.175 "num_base_bdevs_discovered": 2, 00:16:05.175 "num_base_bdevs_operational": 2, 00:16:05.175 "base_bdevs_list": [ 00:16:05.175 { 00:16:05.175 "name": null, 00:16:05.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.175 "is_configured": false, 00:16:05.175 "data_offset": 0, 00:16:05.175 "data_size": 63488 00:16:05.175 }, 00:16:05.175 { 00:16:05.175 "name": "BaseBdev2", 00:16:05.175 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:05.175 "is_configured": true, 00:16:05.175 "data_offset": 2048, 00:16:05.175 "data_size": 63488 00:16:05.175 }, 00:16:05.175 { 00:16:05.175 "name": "BaseBdev3", 00:16:05.175 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:05.175 "is_configured": true, 00:16:05.175 "data_offset": 2048, 00:16:05.175 "data_size": 63488 00:16:05.175 } 00:16:05.175 ] 00:16:05.175 }' 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.175 01:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.744 01:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.744 01:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.744 01:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.744 [2024-11-17 01:36:14.047327] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.744 [2024-11-17 01:36:14.063535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:05.744 01:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.744 01:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.744 [2024-11-17 01:36:14.070833] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.682 "name": "raid_bdev1", 00:16:06.682 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:06.682 "strip_size_kb": 64, 00:16:06.682 "state": "online", 00:16:06.682 "raid_level": "raid5f", 00:16:06.682 
"superblock": true, 00:16:06.682 "num_base_bdevs": 3, 00:16:06.682 "num_base_bdevs_discovered": 3, 00:16:06.682 "num_base_bdevs_operational": 3, 00:16:06.682 "process": { 00:16:06.682 "type": "rebuild", 00:16:06.682 "target": "spare", 00:16:06.682 "progress": { 00:16:06.682 "blocks": 20480, 00:16:06.682 "percent": 16 00:16:06.682 } 00:16:06.682 }, 00:16:06.682 "base_bdevs_list": [ 00:16:06.682 { 00:16:06.682 "name": "spare", 00:16:06.682 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:06.682 "is_configured": true, 00:16:06.682 "data_offset": 2048, 00:16:06.682 "data_size": 63488 00:16:06.682 }, 00:16:06.682 { 00:16:06.682 "name": "BaseBdev2", 00:16:06.682 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:06.682 "is_configured": true, 00:16:06.682 "data_offset": 2048, 00:16:06.682 "data_size": 63488 00:16:06.682 }, 00:16:06.682 { 00:16:06.682 "name": "BaseBdev3", 00:16:06.682 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:06.682 "is_configured": true, 00:16:06.682 "data_offset": 2048, 00:16:06.682 "data_size": 63488 00:16:06.682 } 00:16:06.682 ] 00:16:06.682 }' 00:16:06.682 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.942 [2024-11-17 01:36:15.229911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:16:06.942 [2024-11-17 01:36:15.279859] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.942 [2024-11-17 01:36:15.279920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.942 [2024-11-17 01:36:15.279941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.942 [2024-11-17 01:36:15.279949] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.942 "name": "raid_bdev1", 00:16:06.942 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:06.942 "strip_size_kb": 64, 00:16:06.942 "state": "online", 00:16:06.942 "raid_level": "raid5f", 00:16:06.942 "superblock": true, 00:16:06.942 "num_base_bdevs": 3, 00:16:06.942 "num_base_bdevs_discovered": 2, 00:16:06.942 "num_base_bdevs_operational": 2, 00:16:06.942 "base_bdevs_list": [ 00:16:06.942 { 00:16:06.942 "name": null, 00:16:06.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.942 "is_configured": false, 00:16:06.942 "data_offset": 0, 00:16:06.942 "data_size": 63488 00:16:06.942 }, 00:16:06.942 { 00:16:06.942 "name": "BaseBdev2", 00:16:06.942 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:06.942 "is_configured": true, 00:16:06.942 "data_offset": 2048, 00:16:06.942 "data_size": 63488 00:16:06.942 }, 00:16:06.942 { 00:16:06.942 "name": "BaseBdev3", 00:16:06.942 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:06.942 "is_configured": true, 00:16:06.942 "data_offset": 2048, 00:16:06.942 "data_size": 63488 00:16:06.942 } 00:16:06.942 ] 00:16:06.942 }' 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.942 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.512 01:36:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.512 "name": "raid_bdev1", 00:16:07.512 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:07.512 "strip_size_kb": 64, 00:16:07.512 "state": "online", 00:16:07.512 "raid_level": "raid5f", 00:16:07.512 "superblock": true, 00:16:07.512 "num_base_bdevs": 3, 00:16:07.512 "num_base_bdevs_discovered": 2, 00:16:07.512 "num_base_bdevs_operational": 2, 00:16:07.512 "base_bdevs_list": [ 00:16:07.512 { 00:16:07.512 "name": null, 00:16:07.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.512 "is_configured": false, 00:16:07.512 "data_offset": 0, 00:16:07.512 "data_size": 63488 00:16:07.512 }, 00:16:07.512 { 00:16:07.512 "name": "BaseBdev2", 00:16:07.512 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:07.512 "is_configured": true, 00:16:07.512 "data_offset": 2048, 00:16:07.512 "data_size": 63488 00:16:07.512 }, 00:16:07.512 { 00:16:07.512 "name": "BaseBdev3", 00:16:07.512 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:07.512 "is_configured": true, 00:16:07.512 "data_offset": 2048, 00:16:07.512 
"data_size": 63488 00:16:07.512 } 00:16:07.512 ] 00:16:07.512 }' 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.512 [2024-11-17 01:36:15.900651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.512 [2024-11-17 01:36:15.916224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.512 01:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:07.512 [2024-11-17 01:36:15.923248] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.893 "name": "raid_bdev1", 00:16:08.893 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:08.893 "strip_size_kb": 64, 00:16:08.893 "state": "online", 00:16:08.893 "raid_level": "raid5f", 00:16:08.893 "superblock": true, 00:16:08.893 "num_base_bdevs": 3, 00:16:08.893 "num_base_bdevs_discovered": 3, 00:16:08.893 "num_base_bdevs_operational": 3, 00:16:08.893 "process": { 00:16:08.893 "type": "rebuild", 00:16:08.893 "target": "spare", 00:16:08.893 "progress": { 00:16:08.893 "blocks": 20480, 00:16:08.893 "percent": 16 00:16:08.893 } 00:16:08.893 }, 00:16:08.893 "base_bdevs_list": [ 00:16:08.893 { 00:16:08.893 "name": "spare", 00:16:08.893 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:08.893 "is_configured": true, 00:16:08.893 "data_offset": 2048, 00:16:08.893 "data_size": 63488 00:16:08.893 }, 00:16:08.893 { 00:16:08.893 "name": "BaseBdev2", 00:16:08.893 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:08.893 "is_configured": true, 00:16:08.893 "data_offset": 2048, 00:16:08.893 "data_size": 63488 00:16:08.893 }, 00:16:08.893 { 00:16:08.893 "name": "BaseBdev3", 00:16:08.893 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:08.893 "is_configured": true, 00:16:08.893 "data_offset": 2048, 00:16:08.893 "data_size": 63488 00:16:08.893 } 00:16:08.893 ] 00:16:08.893 }' 
00:16:08.893 01:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.893 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.893 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.893 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.893 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:08.893 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:08.894 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=551 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.894 "name": "raid_bdev1", 00:16:08.894 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:08.894 "strip_size_kb": 64, 00:16:08.894 "state": "online", 00:16:08.894 "raid_level": "raid5f", 00:16:08.894 "superblock": true, 00:16:08.894 "num_base_bdevs": 3, 00:16:08.894 "num_base_bdevs_discovered": 3, 00:16:08.894 "num_base_bdevs_operational": 3, 00:16:08.894 "process": { 00:16:08.894 "type": "rebuild", 00:16:08.894 "target": "spare", 00:16:08.894 "progress": { 00:16:08.894 "blocks": 22528, 00:16:08.894 "percent": 17 00:16:08.894 } 00:16:08.894 }, 00:16:08.894 "base_bdevs_list": [ 00:16:08.894 { 00:16:08.894 "name": "spare", 00:16:08.894 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:08.894 "is_configured": true, 00:16:08.894 "data_offset": 2048, 00:16:08.894 "data_size": 63488 00:16:08.894 }, 00:16:08.894 { 00:16:08.894 "name": "BaseBdev2", 00:16:08.894 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:08.894 "is_configured": true, 00:16:08.894 "data_offset": 2048, 00:16:08.894 "data_size": 63488 00:16:08.894 }, 00:16:08.894 { 00:16:08.894 "name": "BaseBdev3", 00:16:08.894 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:08.894 "is_configured": true, 00:16:08.894 "data_offset": 2048, 00:16:08.894 "data_size": 63488 00:16:08.894 } 00:16:08.894 ] 00:16:08.894 }' 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.894 01:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.833 "name": "raid_bdev1", 00:16:09.833 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:09.833 "strip_size_kb": 64, 00:16:09.833 "state": "online", 00:16:09.833 "raid_level": "raid5f", 00:16:09.833 "superblock": true, 00:16:09.833 "num_base_bdevs": 3, 00:16:09.833 "num_base_bdevs_discovered": 3, 00:16:09.833 
"num_base_bdevs_operational": 3, 00:16:09.833 "process": { 00:16:09.833 "type": "rebuild", 00:16:09.833 "target": "spare", 00:16:09.833 "progress": { 00:16:09.833 "blocks": 45056, 00:16:09.833 "percent": 35 00:16:09.833 } 00:16:09.833 }, 00:16:09.833 "base_bdevs_list": [ 00:16:09.833 { 00:16:09.833 "name": "spare", 00:16:09.833 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:09.833 "is_configured": true, 00:16:09.833 "data_offset": 2048, 00:16:09.833 "data_size": 63488 00:16:09.833 }, 00:16:09.833 { 00:16:09.833 "name": "BaseBdev2", 00:16:09.833 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:09.833 "is_configured": true, 00:16:09.833 "data_offset": 2048, 00:16:09.833 "data_size": 63488 00:16:09.833 }, 00:16:09.833 { 00:16:09.833 "name": "BaseBdev3", 00:16:09.833 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:09.833 "is_configured": true, 00:16:09.833 "data_offset": 2048, 00:16:09.833 "data_size": 63488 00:16:09.833 } 00:16:09.833 ] 00:16:09.833 }' 00:16:09.833 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.093 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.093 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.093 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.093 01:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.034 "name": "raid_bdev1", 00:16:11.034 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:11.034 "strip_size_kb": 64, 00:16:11.034 "state": "online", 00:16:11.034 "raid_level": "raid5f", 00:16:11.034 "superblock": true, 00:16:11.034 "num_base_bdevs": 3, 00:16:11.034 "num_base_bdevs_discovered": 3, 00:16:11.034 "num_base_bdevs_operational": 3, 00:16:11.034 "process": { 00:16:11.034 "type": "rebuild", 00:16:11.034 "target": "spare", 00:16:11.034 "progress": { 00:16:11.034 "blocks": 69632, 00:16:11.034 "percent": 54 00:16:11.034 } 00:16:11.034 }, 00:16:11.034 "base_bdevs_list": [ 00:16:11.034 { 00:16:11.034 "name": "spare", 00:16:11.034 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:11.034 "is_configured": true, 00:16:11.034 "data_offset": 2048, 00:16:11.034 "data_size": 63488 00:16:11.034 }, 00:16:11.034 { 00:16:11.034 "name": "BaseBdev2", 00:16:11.034 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:11.034 "is_configured": true, 00:16:11.034 "data_offset": 2048, 00:16:11.034 "data_size": 63488 00:16:11.034 }, 00:16:11.034 { 00:16:11.034 "name": "BaseBdev3", 
00:16:11.034 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:11.034 "is_configured": true, 00:16:11.034 "data_offset": 2048, 00:16:11.034 "data_size": 63488 00:16:11.034 } 00:16:11.034 ] 00:16:11.034 }' 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.034 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.294 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.294 01:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.235 "name": "raid_bdev1", 00:16:12.235 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:12.235 "strip_size_kb": 64, 00:16:12.235 "state": "online", 00:16:12.235 "raid_level": "raid5f", 00:16:12.235 "superblock": true, 00:16:12.235 "num_base_bdevs": 3, 00:16:12.235 "num_base_bdevs_discovered": 3, 00:16:12.235 "num_base_bdevs_operational": 3, 00:16:12.235 "process": { 00:16:12.235 "type": "rebuild", 00:16:12.235 "target": "spare", 00:16:12.235 "progress": { 00:16:12.235 "blocks": 92160, 00:16:12.235 "percent": 72 00:16:12.235 } 00:16:12.235 }, 00:16:12.235 "base_bdevs_list": [ 00:16:12.235 { 00:16:12.235 "name": "spare", 00:16:12.235 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:12.235 "is_configured": true, 00:16:12.235 "data_offset": 2048, 00:16:12.235 "data_size": 63488 00:16:12.235 }, 00:16:12.235 { 00:16:12.235 "name": "BaseBdev2", 00:16:12.235 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:12.235 "is_configured": true, 00:16:12.235 "data_offset": 2048, 00:16:12.235 "data_size": 63488 00:16:12.235 }, 00:16:12.235 { 00:16:12.235 "name": "BaseBdev3", 00:16:12.235 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:12.235 "is_configured": true, 00:16:12.235 "data_offset": 2048, 00:16:12.235 "data_size": 63488 00:16:12.235 } 00:16:12.235 ] 00:16:12.235 }' 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.235 01:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.615 01:36:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.615 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.615 "name": "raid_bdev1", 00:16:13.615 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:13.615 "strip_size_kb": 64, 00:16:13.615 "state": "online", 00:16:13.615 "raid_level": "raid5f", 00:16:13.615 "superblock": true, 00:16:13.615 "num_base_bdevs": 3, 00:16:13.615 "num_base_bdevs_discovered": 3, 00:16:13.615 "num_base_bdevs_operational": 3, 00:16:13.615 "process": { 00:16:13.615 "type": "rebuild", 00:16:13.615 "target": "spare", 00:16:13.615 "progress": { 00:16:13.615 "blocks": 116736, 00:16:13.615 "percent": 91 00:16:13.615 } 00:16:13.615 }, 00:16:13.615 "base_bdevs_list": [ 00:16:13.615 { 00:16:13.615 "name": "spare", 00:16:13.615 "uuid": 
"b0613bbb-d766-576c-befd-492cc76c3add", 00:16:13.615 "is_configured": true, 00:16:13.615 "data_offset": 2048, 00:16:13.615 "data_size": 63488 00:16:13.615 }, 00:16:13.615 { 00:16:13.615 "name": "BaseBdev2", 00:16:13.616 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:13.616 "is_configured": true, 00:16:13.616 "data_offset": 2048, 00:16:13.616 "data_size": 63488 00:16:13.616 }, 00:16:13.616 { 00:16:13.616 "name": "BaseBdev3", 00:16:13.616 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:13.616 "is_configured": true, 00:16:13.616 "data_offset": 2048, 00:16:13.616 "data_size": 63488 00:16:13.616 } 00:16:13.616 ] 00:16:13.616 }' 00:16:13.616 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.616 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.616 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.616 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.616 01:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.893 [2024-11-17 01:36:22.165705] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:13.893 [2024-11-17 01:36:22.165802] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:13.893 [2024-11-17 01:36:22.165907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.478 "name": "raid_bdev1", 00:16:14.478 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:14.478 "strip_size_kb": 64, 00:16:14.478 "state": "online", 00:16:14.478 "raid_level": "raid5f", 00:16:14.478 "superblock": true, 00:16:14.478 "num_base_bdevs": 3, 00:16:14.478 "num_base_bdevs_discovered": 3, 00:16:14.478 "num_base_bdevs_operational": 3, 00:16:14.478 "base_bdevs_list": [ 00:16:14.478 { 00:16:14.478 "name": "spare", 00:16:14.478 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:14.478 "is_configured": true, 00:16:14.478 "data_offset": 2048, 00:16:14.478 "data_size": 63488 00:16:14.478 }, 00:16:14.478 { 00:16:14.478 "name": "BaseBdev2", 00:16:14.478 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:14.478 "is_configured": true, 00:16:14.478 "data_offset": 2048, 00:16:14.478 "data_size": 63488 00:16:14.478 }, 00:16:14.478 { 00:16:14.478 "name": "BaseBdev3", 00:16:14.478 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:14.478 "is_configured": true, 00:16:14.478 "data_offset": 2048, 00:16:14.478 "data_size": 63488 00:16:14.478 } 
00:16:14.478 ] 00:16:14.478 }' 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:14.478 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.739 01:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.739 "name": "raid_bdev1", 00:16:14.739 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:14.739 "strip_size_kb": 64, 00:16:14.739 "state": "online", 00:16:14.739 "raid_level": 
"raid5f", 00:16:14.739 "superblock": true, 00:16:14.739 "num_base_bdevs": 3, 00:16:14.739 "num_base_bdevs_discovered": 3, 00:16:14.739 "num_base_bdevs_operational": 3, 00:16:14.739 "base_bdevs_list": [ 00:16:14.739 { 00:16:14.739 "name": "spare", 00:16:14.739 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:14.739 "is_configured": true, 00:16:14.739 "data_offset": 2048, 00:16:14.739 "data_size": 63488 00:16:14.739 }, 00:16:14.739 { 00:16:14.739 "name": "BaseBdev2", 00:16:14.739 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:14.739 "is_configured": true, 00:16:14.739 "data_offset": 2048, 00:16:14.739 "data_size": 63488 00:16:14.739 }, 00:16:14.739 { 00:16:14.739 "name": "BaseBdev3", 00:16:14.739 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:14.739 "is_configured": true, 00:16:14.739 "data_offset": 2048, 00:16:14.739 "data_size": 63488 00:16:14.739 } 00:16:14.739 ] 00:16:14.739 }' 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.739 01:36:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.739 "name": "raid_bdev1", 00:16:14.739 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:14.739 "strip_size_kb": 64, 00:16:14.739 "state": "online", 00:16:14.739 "raid_level": "raid5f", 00:16:14.739 "superblock": true, 00:16:14.739 "num_base_bdevs": 3, 00:16:14.739 "num_base_bdevs_discovered": 3, 00:16:14.739 "num_base_bdevs_operational": 3, 00:16:14.739 "base_bdevs_list": [ 00:16:14.739 { 00:16:14.739 "name": "spare", 00:16:14.739 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:14.739 "is_configured": true, 00:16:14.739 "data_offset": 2048, 00:16:14.739 "data_size": 63488 00:16:14.739 }, 00:16:14.739 { 00:16:14.739 "name": "BaseBdev2", 00:16:14.739 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:14.739 "is_configured": true, 00:16:14.739 "data_offset": 2048, 00:16:14.739 
"data_size": 63488 00:16:14.739 }, 00:16:14.739 { 00:16:14.739 "name": "BaseBdev3", 00:16:14.739 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:14.739 "is_configured": true, 00:16:14.739 "data_offset": 2048, 00:16:14.739 "data_size": 63488 00:16:14.739 } 00:16:14.739 ] 00:16:14.739 }' 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.739 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.310 [2024-11-17 01:36:23.540117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.310 [2024-11-17 01:36:23.540206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.310 [2024-11-17 01:36:23.540340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.310 [2024-11-17 01:36:23.540466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.310 [2024-11-17 01:36:23.540523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.310 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:15.569 /dev/nbd0 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.569 1+0 records in 00:16:15.569 1+0 records out 00:16:15.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356777 s, 11.5 MB/s 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.569 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:15.570 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.570 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:15.570 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:15.570 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.570 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.570 01:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:15.830 /dev/nbd1 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.830 1+0 records in 00:16:15.830 1+0 records out 00:16:15.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036231 s, 11.3 MB/s 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.830 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.089 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.090 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.090 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.090 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.090 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.090 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:16:16.090 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.090 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.090 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:16.349 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:16.349 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:16.349 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:16.349 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.350 [2024-11-17 01:36:24.759083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.350 [2024-11-17 01:36:24.759166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.350 [2024-11-17 01:36:24.759187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:16.350 [2024-11-17 01:36:24.759198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.350 [2024-11-17 01:36:24.761419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.350 [2024-11-17 01:36:24.761460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.350 [2024-11-17 01:36:24.761547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:16.350 [2024-11-17 01:36:24.761612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.350 [2024-11-17 01:36:24.761749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.350 [2024-11-17 01:36:24.761863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.350 spare 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.350 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.610 [2024-11-17 01:36:24.861747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:16.610 [2024-11-17 01:36:24.861803] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:16.610 [2024-11-17 01:36:24.862081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:16.610 [2024-11-17 01:36:24.867282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:16.610 [2024-11-17 01:36:24.867306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:16.610 [2024-11-17 01:36:24.867496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.610 01:36:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.610 "name": "raid_bdev1", 00:16:16.610 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:16.610 "strip_size_kb": 64, 00:16:16.610 "state": "online", 00:16:16.610 "raid_level": "raid5f", 00:16:16.610 "superblock": true, 00:16:16.610 "num_base_bdevs": 3, 00:16:16.610 "num_base_bdevs_discovered": 3, 00:16:16.610 "num_base_bdevs_operational": 3, 00:16:16.610 "base_bdevs_list": [ 00:16:16.610 { 00:16:16.610 "name": "spare", 00:16:16.610 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:16.610 "is_configured": true, 00:16:16.610 "data_offset": 2048, 00:16:16.610 "data_size": 63488 00:16:16.610 }, 00:16:16.610 { 00:16:16.610 "name": "BaseBdev2", 00:16:16.610 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:16.610 "is_configured": true, 00:16:16.610 "data_offset": 2048, 00:16:16.610 "data_size": 63488 00:16:16.610 }, 00:16:16.610 { 00:16:16.610 "name": "BaseBdev3", 00:16:16.610 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:16.610 "is_configured": true, 00:16:16.610 "data_offset": 2048, 00:16:16.610 "data_size": 63488 00:16:16.610 } 00:16:16.610 ] 00:16:16.610 }' 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.610 01:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.870 01:36:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.870 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.130 "name": "raid_bdev1", 00:16:17.130 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:17.130 "strip_size_kb": 64, 00:16:17.130 "state": "online", 00:16:17.130 "raid_level": "raid5f", 00:16:17.130 "superblock": true, 00:16:17.130 "num_base_bdevs": 3, 00:16:17.130 "num_base_bdevs_discovered": 3, 00:16:17.130 "num_base_bdevs_operational": 3, 00:16:17.130 "base_bdevs_list": [ 00:16:17.130 { 00:16:17.130 "name": "spare", 00:16:17.130 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:17.130 "is_configured": true, 00:16:17.130 "data_offset": 2048, 00:16:17.130 "data_size": 63488 00:16:17.130 }, 00:16:17.130 { 00:16:17.130 "name": "BaseBdev2", 00:16:17.130 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:17.130 "is_configured": true, 00:16:17.130 "data_offset": 2048, 00:16:17.130 "data_size": 63488 00:16:17.130 }, 00:16:17.130 { 00:16:17.130 "name": "BaseBdev3", 00:16:17.130 "uuid": 
"c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:17.130 "is_configured": true, 00:16:17.130 "data_offset": 2048, 00:16:17.130 "data_size": 63488 00:16:17.130 } 00:16:17.130 ] 00:16:17.130 }' 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.130 [2024-11-17 01:36:25.484630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:17.130 
01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.130 "name": "raid_bdev1", 00:16:17.130 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:17.130 "strip_size_kb": 64, 00:16:17.130 "state": "online", 00:16:17.130 "raid_level": "raid5f", 00:16:17.130 "superblock": true, 00:16:17.130 "num_base_bdevs": 3, 00:16:17.130 "num_base_bdevs_discovered": 2, 00:16:17.130 "num_base_bdevs_operational": 2, 
00:16:17.130 "base_bdevs_list": [ 00:16:17.130 { 00:16:17.130 "name": null, 00:16:17.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.130 "is_configured": false, 00:16:17.130 "data_offset": 0, 00:16:17.130 "data_size": 63488 00:16:17.130 }, 00:16:17.130 { 00:16:17.130 "name": "BaseBdev2", 00:16:17.130 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:17.130 "is_configured": true, 00:16:17.130 "data_offset": 2048, 00:16:17.130 "data_size": 63488 00:16:17.130 }, 00:16:17.130 { 00:16:17.130 "name": "BaseBdev3", 00:16:17.130 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:17.130 "is_configured": true, 00:16:17.130 "data_offset": 2048, 00:16:17.130 "data_size": 63488 00:16:17.130 } 00:16:17.130 ] 00:16:17.130 }' 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.130 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.700 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:17.700 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.700 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.700 [2024-11-17 01:36:25.955893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.700 [2024-11-17 01:36:25.956137] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:17.700 [2024-11-17 01:36:25.956161] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:17.700 [2024-11-17 01:36:25.956202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.700 [2024-11-17 01:36:25.972513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:17.700 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.700 01:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:17.700 [2024-11-17 01:36:25.980237] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.639 01:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.639 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.639 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.639 "name": "raid_bdev1", 00:16:18.639 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:18.639 "strip_size_kb": 64, 00:16:18.639 "state": "online", 00:16:18.639 
"raid_level": "raid5f", 00:16:18.639 "superblock": true, 00:16:18.639 "num_base_bdevs": 3, 00:16:18.639 "num_base_bdevs_discovered": 3, 00:16:18.639 "num_base_bdevs_operational": 3, 00:16:18.639 "process": { 00:16:18.639 "type": "rebuild", 00:16:18.639 "target": "spare", 00:16:18.639 "progress": { 00:16:18.639 "blocks": 20480, 00:16:18.639 "percent": 16 00:16:18.639 } 00:16:18.639 }, 00:16:18.639 "base_bdevs_list": [ 00:16:18.639 { 00:16:18.639 "name": "spare", 00:16:18.639 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:18.639 "is_configured": true, 00:16:18.639 "data_offset": 2048, 00:16:18.639 "data_size": 63488 00:16:18.639 }, 00:16:18.639 { 00:16:18.639 "name": "BaseBdev2", 00:16:18.639 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:18.639 "is_configured": true, 00:16:18.639 "data_offset": 2048, 00:16:18.639 "data_size": 63488 00:16:18.639 }, 00:16:18.639 { 00:16:18.639 "name": "BaseBdev3", 00:16:18.640 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:18.640 "is_configured": true, 00:16:18.640 "data_offset": 2048, 00:16:18.640 "data_size": 63488 00:16:18.640 } 00:16:18.640 ] 00:16:18.640 }' 00:16:18.640 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.640 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.640 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.899 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.899 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:18.899 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.899 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.899 [2024-11-17 01:36:27.115257] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.899 [2024-11-17 01:36:27.191243] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:18.899 [2024-11-17 01:36:27.191309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.899 [2024-11-17 01:36:27.191326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.899 [2024-11-17 01:36:27.191336] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.900 "name": "raid_bdev1", 00:16:18.900 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:18.900 "strip_size_kb": 64, 00:16:18.900 "state": "online", 00:16:18.900 "raid_level": "raid5f", 00:16:18.900 "superblock": true, 00:16:18.900 "num_base_bdevs": 3, 00:16:18.900 "num_base_bdevs_discovered": 2, 00:16:18.900 "num_base_bdevs_operational": 2, 00:16:18.900 "base_bdevs_list": [ 00:16:18.900 { 00:16:18.900 "name": null, 00:16:18.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.900 "is_configured": false, 00:16:18.900 "data_offset": 0, 00:16:18.900 "data_size": 63488 00:16:18.900 }, 00:16:18.900 { 00:16:18.900 "name": "BaseBdev2", 00:16:18.900 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:18.900 "is_configured": true, 00:16:18.900 "data_offset": 2048, 00:16:18.900 "data_size": 63488 00:16:18.900 }, 00:16:18.900 { 00:16:18.900 "name": "BaseBdev3", 00:16:18.900 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:18.900 "is_configured": true, 00:16:18.900 "data_offset": 2048, 00:16:18.900 "data_size": 63488 00:16:18.900 } 00:16:18.900 ] 00:16:18.900 }' 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.900 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.469 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.469 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.469 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.469 [2024-11-17 01:36:27.660341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.469 [2024-11-17 01:36:27.660432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.469 [2024-11-17 01:36:27.660452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:19.469 [2024-11-17 01:36:27.660465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.469 [2024-11-17 01:36:27.660947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.469 [2024-11-17 01:36:27.660977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.469 [2024-11-17 01:36:27.661068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:19.469 [2024-11-17 01:36:27.661086] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:19.469 [2024-11-17 01:36:27.661095] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:19.469 [2024-11-17 01:36:27.661117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.469 [2024-11-17 01:36:27.676243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:19.469 spare 00:16:19.469 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.469 01:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:19.469 [2024-11-17 01:36:27.683458] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.409 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.409 "name": "raid_bdev1", 00:16:20.410 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:20.410 "strip_size_kb": 64, 00:16:20.410 "state": 
"online", 00:16:20.410 "raid_level": "raid5f", 00:16:20.410 "superblock": true, 00:16:20.410 "num_base_bdevs": 3, 00:16:20.410 "num_base_bdevs_discovered": 3, 00:16:20.410 "num_base_bdevs_operational": 3, 00:16:20.410 "process": { 00:16:20.410 "type": "rebuild", 00:16:20.410 "target": "spare", 00:16:20.410 "progress": { 00:16:20.410 "blocks": 20480, 00:16:20.410 "percent": 16 00:16:20.410 } 00:16:20.410 }, 00:16:20.410 "base_bdevs_list": [ 00:16:20.410 { 00:16:20.410 "name": "spare", 00:16:20.410 "uuid": "b0613bbb-d766-576c-befd-492cc76c3add", 00:16:20.410 "is_configured": true, 00:16:20.410 "data_offset": 2048, 00:16:20.410 "data_size": 63488 00:16:20.410 }, 00:16:20.410 { 00:16:20.410 "name": "BaseBdev2", 00:16:20.410 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:20.410 "is_configured": true, 00:16:20.410 "data_offset": 2048, 00:16:20.410 "data_size": 63488 00:16:20.410 }, 00:16:20.410 { 00:16:20.410 "name": "BaseBdev3", 00:16:20.410 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:20.410 "is_configured": true, 00:16:20.410 "data_offset": 2048, 00:16:20.410 "data_size": 63488 00:16:20.410 } 00:16:20.410 ] 00:16:20.410 }' 00:16:20.410 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.410 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.410 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.410 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.410 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:20.410 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.410 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.410 [2024-11-17 01:36:28.842611] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.669 [2024-11-17 01:36:28.890735] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:20.669 [2024-11-17 01:36:28.890799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.669 [2024-11-17 01:36:28.890835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.669 [2024-11-17 01:36:28.890844] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.669 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.670 "name": "raid_bdev1", 00:16:20.670 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:20.670 "strip_size_kb": 64, 00:16:20.670 "state": "online", 00:16:20.670 "raid_level": "raid5f", 00:16:20.670 "superblock": true, 00:16:20.670 "num_base_bdevs": 3, 00:16:20.670 "num_base_bdevs_discovered": 2, 00:16:20.670 "num_base_bdevs_operational": 2, 00:16:20.670 "base_bdevs_list": [ 00:16:20.670 { 00:16:20.670 "name": null, 00:16:20.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.670 "is_configured": false, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 63488 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev2", 00:16:20.670 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 2048, 00:16:20.670 "data_size": 63488 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev3", 00:16:20.670 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 2048, 00:16:20.670 "data_size": 63488 00:16:20.670 } 00:16:20.670 ] 00:16:20.670 }' 00:16:20.670 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.670 01:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.929 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.930 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:20.930 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.930 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.930 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.930 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.930 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.930 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.930 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.190 "name": "raid_bdev1", 00:16:21.190 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:21.190 "strip_size_kb": 64, 00:16:21.190 "state": "online", 00:16:21.190 "raid_level": "raid5f", 00:16:21.190 "superblock": true, 00:16:21.190 "num_base_bdevs": 3, 00:16:21.190 "num_base_bdevs_discovered": 2, 00:16:21.190 "num_base_bdevs_operational": 2, 00:16:21.190 "base_bdevs_list": [ 00:16:21.190 { 00:16:21.190 "name": null, 00:16:21.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.190 "is_configured": false, 00:16:21.190 "data_offset": 0, 00:16:21.190 "data_size": 63488 00:16:21.190 }, 00:16:21.190 { 00:16:21.190 "name": "BaseBdev2", 00:16:21.190 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:21.190 "is_configured": true, 00:16:21.190 "data_offset": 2048, 00:16:21.190 "data_size": 63488 00:16:21.190 }, 00:16:21.190 { 00:16:21.190 "name": "BaseBdev3", 00:16:21.190 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:21.190 "is_configured": true, 
00:16:21.190 "data_offset": 2048, 00:16:21.190 "data_size": 63488 00:16:21.190 } 00:16:21.190 ] 00:16:21.190 }' 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.190 [2024-11-17 01:36:29.527695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:21.190 [2024-11-17 01:36:29.527752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.190 [2024-11-17 01:36:29.527803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:21.190 [2024-11-17 01:36:29.527813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.190 [2024-11-17 01:36:29.528244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.190 [2024-11-17 
01:36:29.528274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:21.190 [2024-11-17 01:36:29.528354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:21.190 [2024-11-17 01:36:29.528369] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.190 [2024-11-17 01:36:29.528388] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:21.190 [2024-11-17 01:36:29.528398] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:21.190 BaseBdev1 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.190 01:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.129 01:36:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.129 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.389 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.389 "name": "raid_bdev1", 00:16:22.389 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:22.389 "strip_size_kb": 64, 00:16:22.389 "state": "online", 00:16:22.389 "raid_level": "raid5f", 00:16:22.389 "superblock": true, 00:16:22.389 "num_base_bdevs": 3, 00:16:22.389 "num_base_bdevs_discovered": 2, 00:16:22.389 "num_base_bdevs_operational": 2, 00:16:22.389 "base_bdevs_list": [ 00:16:22.389 { 00:16:22.389 "name": null, 00:16:22.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.389 "is_configured": false, 00:16:22.389 "data_offset": 0, 00:16:22.389 "data_size": 63488 00:16:22.389 }, 00:16:22.389 { 00:16:22.389 "name": "BaseBdev2", 00:16:22.389 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:22.389 "is_configured": true, 00:16:22.389 "data_offset": 2048, 00:16:22.389 "data_size": 63488 00:16:22.389 }, 00:16:22.389 { 00:16:22.389 "name": "BaseBdev3", 00:16:22.389 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:22.389 "is_configured": true, 00:16:22.389 "data_offset": 2048, 00:16:22.389 "data_size": 63488 00:16:22.389 } 00:16:22.389 ] 00:16:22.389 }' 00:16:22.389 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.389 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.649 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.649 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.649 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.649 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.649 01:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.649 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.649 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.649 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.649 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.649 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.649 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.649 "name": "raid_bdev1", 00:16:22.649 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:22.649 "strip_size_kb": 64, 00:16:22.649 "state": "online", 00:16:22.649 "raid_level": "raid5f", 00:16:22.649 "superblock": true, 00:16:22.649 "num_base_bdevs": 3, 00:16:22.649 "num_base_bdevs_discovered": 2, 00:16:22.649 "num_base_bdevs_operational": 2, 00:16:22.649 "base_bdevs_list": [ 00:16:22.649 { 00:16:22.649 "name": null, 00:16:22.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.649 "is_configured": false, 00:16:22.649 "data_offset": 0, 00:16:22.649 "data_size": 63488 00:16:22.649 }, 00:16:22.649 { 00:16:22.649 "name": "BaseBdev2", 00:16:22.649 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 
00:16:22.649 "is_configured": true, 00:16:22.649 "data_offset": 2048, 00:16:22.649 "data_size": 63488 00:16:22.649 }, 00:16:22.649 { 00:16:22.649 "name": "BaseBdev3", 00:16:22.649 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:22.649 "is_configured": true, 00:16:22.649 "data_offset": 2048, 00:16:22.649 "data_size": 63488 00:16:22.650 } 00:16:22.650 ] 00:16:22.650 }' 00:16:22.650 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.650 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.650 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.909 01:36:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.909 [2024-11-17 01:36:31.141136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.909 [2024-11-17 01:36:31.141287] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:22.909 [2024-11-17 01:36:31.141303] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:22.909 request: 00:16:22.909 { 00:16:22.909 "base_bdev": "BaseBdev1", 00:16:22.909 "raid_bdev": "raid_bdev1", 00:16:22.909 "method": "bdev_raid_add_base_bdev", 00:16:22.909 "req_id": 1 00:16:22.909 } 00:16:22.909 Got JSON-RPC error response 00:16:22.909 response: 00:16:22.909 { 00:16:22.909 "code": -22, 00:16:22.909 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:22.909 } 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.909 01:36:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:23.848 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:23.848 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.848 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.849 "name": "raid_bdev1", 00:16:23.849 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:23.849 "strip_size_kb": 64, 00:16:23.849 "state": "online", 00:16:23.849 "raid_level": "raid5f", 00:16:23.849 "superblock": true, 00:16:23.849 "num_base_bdevs": 3, 00:16:23.849 "num_base_bdevs_discovered": 2, 00:16:23.849 "num_base_bdevs_operational": 2, 00:16:23.849 "base_bdevs_list": [ 00:16:23.849 { 00:16:23.849 "name": null, 00:16:23.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.849 "is_configured": false, 00:16:23.849 "data_offset": 0, 00:16:23.849 "data_size": 63488 00:16:23.849 }, 00:16:23.849 { 00:16:23.849 
"name": "BaseBdev2", 00:16:23.849 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:23.849 "is_configured": true, 00:16:23.849 "data_offset": 2048, 00:16:23.849 "data_size": 63488 00:16:23.849 }, 00:16:23.849 { 00:16:23.849 "name": "BaseBdev3", 00:16:23.849 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:23.849 "is_configured": true, 00:16:23.849 "data_offset": 2048, 00:16:23.849 "data_size": 63488 00:16:23.849 } 00:16:23.849 ] 00:16:23.849 }' 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.849 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.418 "name": "raid_bdev1", 00:16:24.418 "uuid": "137e573f-5f3a-4c61-aa90-be699676183e", 00:16:24.418 
"strip_size_kb": 64, 00:16:24.418 "state": "online", 00:16:24.418 "raid_level": "raid5f", 00:16:24.418 "superblock": true, 00:16:24.418 "num_base_bdevs": 3, 00:16:24.418 "num_base_bdevs_discovered": 2, 00:16:24.418 "num_base_bdevs_operational": 2, 00:16:24.418 "base_bdevs_list": [ 00:16:24.418 { 00:16:24.418 "name": null, 00:16:24.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.418 "is_configured": false, 00:16:24.418 "data_offset": 0, 00:16:24.418 "data_size": 63488 00:16:24.418 }, 00:16:24.418 { 00:16:24.418 "name": "BaseBdev2", 00:16:24.418 "uuid": "d1ec8736-e787-5dfb-98e3-874128ab892a", 00:16:24.418 "is_configured": true, 00:16:24.418 "data_offset": 2048, 00:16:24.418 "data_size": 63488 00:16:24.418 }, 00:16:24.418 { 00:16:24.418 "name": "BaseBdev3", 00:16:24.418 "uuid": "c56337e5-b9d9-555a-b1a0-457d05e15a87", 00:16:24.418 "is_configured": true, 00:16:24.418 "data_offset": 2048, 00:16:24.418 "data_size": 63488 00:16:24.418 } 00:16:24.418 ] 00:16:24.418 }' 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81748 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81748 ']' 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81748 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.418 01:36:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81748 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.418 killing process with pid 81748 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81748' 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81748 00:16:24.418 Received shutdown signal, test time was about 60.000000 seconds 00:16:24.418 00:16:24.418 Latency(us) 00:16:24.418 [2024-11-17T01:36:32.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.418 [2024-11-17T01:36:32.878Z] =================================================================================================================== 00:16:24.418 [2024-11-17T01:36:32.878Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.418 [2024-11-17 01:36:32.744318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.418 [2024-11-17 01:36:32.744435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.418 01:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81748 00:16:24.418 [2024-11-17 01:36:32.744508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.418 [2024-11-17 01:36:32.744520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:24.677 [2024-11-17 01:36:33.119713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.079 01:36:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:26.079 00:16:26.079 real 0m23.198s 00:16:26.079 user 0m29.581s 
00:16:26.079 sys 0m2.950s 00:16:26.079 01:36:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.079 01:36:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.079 ************************************ 00:16:26.079 END TEST raid5f_rebuild_test_sb 00:16:26.079 ************************************ 00:16:26.079 01:36:34 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:26.079 01:36:34 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:26.079 01:36:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:26.079 01:36:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.079 01:36:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.079 ************************************ 00:16:26.079 START TEST raid5f_state_function_test 00:16:26.079 ************************************ 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82503 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:26.079 Process raid pid: 82503 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82503' 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82503 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82503 ']' 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.079 01:36:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.079 [2024-11-17 01:36:34.336006] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:26.079 [2024-11-17 01:36:34.336144] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.079 [2024-11-17 01:36:34.512736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.337 [2024-11-17 01:36:34.629765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.595 [2024-11-17 01:36:34.843608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.595 [2024-11-17 01:36:34.843642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.854 [2024-11-17 01:36:35.139876] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.854 [2024-11-17 01:36:35.139930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.854 [2024-11-17 01:36:35.139940] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.854 [2024-11-17 01:36:35.139965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.854 [2024-11-17 01:36:35.139971] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:26.854 [2024-11-17 01:36:35.139979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.854 [2024-11-17 01:36:35.139985] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:26.854 [2024-11-17 01:36:35.139993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.854 "name": "Existed_Raid", 00:16:26.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.854 "strip_size_kb": 64, 00:16:26.854 "state": "configuring", 00:16:26.854 "raid_level": "raid5f", 00:16:26.854 "superblock": false, 00:16:26.854 "num_base_bdevs": 4, 00:16:26.854 "num_base_bdevs_discovered": 0, 00:16:26.854 "num_base_bdevs_operational": 4, 00:16:26.854 "base_bdevs_list": [ 00:16:26.854 { 00:16:26.854 "name": "BaseBdev1", 00:16:26.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.854 "is_configured": false, 00:16:26.854 "data_offset": 0, 00:16:26.854 "data_size": 0 00:16:26.854 }, 00:16:26.854 { 00:16:26.854 "name": "BaseBdev2", 00:16:26.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.854 "is_configured": false, 00:16:26.854 "data_offset": 0, 00:16:26.854 "data_size": 0 00:16:26.854 }, 00:16:26.854 { 00:16:26.854 "name": "BaseBdev3", 00:16:26.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.854 "is_configured": false, 00:16:26.854 "data_offset": 0, 00:16:26.854 "data_size": 0 00:16:26.854 }, 00:16:26.854 { 00:16:26.854 "name": "BaseBdev4", 00:16:26.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.854 "is_configured": false, 00:16:26.854 "data_offset": 0, 00:16:26.854 "data_size": 0 00:16:26.854 } 00:16:26.854 ] 00:16:26.854 }' 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.854 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.421 [2024-11-17 01:36:35.591065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.421 [2024-11-17 01:36:35.591100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.421 [2024-11-17 01:36:35.603052] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.421 [2024-11-17 01:36:35.603113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:27.421 [2024-11-17 01:36:35.603122] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.421 [2024-11-17 01:36:35.603131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.421 [2024-11-17 01:36:35.603138] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:27.421 [2024-11-17 01:36:35.603156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:27.421 [2024-11-17 01:36:35.603163] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:27.421 [2024-11-17 01:36:35.603171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.421 [2024-11-17 01:36:35.650065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.421 BaseBdev1 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.421 
01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.421 [ 00:16:27.421 { 00:16:27.421 "name": "BaseBdev1", 00:16:27.421 "aliases": [ 00:16:27.421 "8f0a7605-f957-4546-a84a-1b3e7b3f2316" 00:16:27.421 ], 00:16:27.421 "product_name": "Malloc disk", 00:16:27.421 "block_size": 512, 00:16:27.421 "num_blocks": 65536, 00:16:27.421 "uuid": "8f0a7605-f957-4546-a84a-1b3e7b3f2316", 00:16:27.421 "assigned_rate_limits": { 00:16:27.421 "rw_ios_per_sec": 0, 00:16:27.421 "rw_mbytes_per_sec": 0, 00:16:27.421 "r_mbytes_per_sec": 0, 00:16:27.421 "w_mbytes_per_sec": 0 00:16:27.421 }, 00:16:27.421 "claimed": true, 00:16:27.421 "claim_type": "exclusive_write", 00:16:27.421 "zoned": false, 00:16:27.421 "supported_io_types": { 00:16:27.421 "read": true, 00:16:27.421 "write": true, 00:16:27.421 "unmap": true, 00:16:27.421 "flush": true, 00:16:27.421 "reset": true, 00:16:27.421 "nvme_admin": false, 00:16:27.421 "nvme_io": false, 00:16:27.421 "nvme_io_md": false, 00:16:27.421 "write_zeroes": true, 00:16:27.421 "zcopy": true, 00:16:27.421 "get_zone_info": false, 00:16:27.421 "zone_management": false, 00:16:27.421 "zone_append": false, 00:16:27.421 "compare": false, 00:16:27.421 "compare_and_write": false, 00:16:27.421 "abort": true, 00:16:27.421 "seek_hole": false, 00:16:27.421 "seek_data": false, 00:16:27.421 "copy": true, 00:16:27.421 "nvme_iov_md": false 00:16:27.421 }, 00:16:27.421 "memory_domains": [ 00:16:27.421 { 00:16:27.421 "dma_device_id": "system", 00:16:27.421 "dma_device_type": 1 00:16:27.421 }, 00:16:27.421 { 00:16:27.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.421 "dma_device_type": 2 00:16:27.421 } 00:16:27.421 ], 00:16:27.421 "driver_specific": {} 00:16:27.421 } 
00:16:27.421 ] 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.421 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
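The xtrace above shows `verify_raid_bdev_state` capturing `rpc_cmd bdev_raid_get_bdevs all`, filtering it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and comparing fields such as `state` and `num_base_bdevs_operational` against expected values. A minimal Python sketch of that same check, assuming the JSON shape shown in the log (the sample data is trimmed to the inspected fields, and the helper name mirrors the shell function but is otherwise hypothetical):

```python
import json

# Sample output shaped like `rpc.py bdev_raid_get_bdevs all` in the log above,
# trimmed to the fields the verification step inspects (assumed subset).
RAID_BDEVS = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Mimic bdev_raid.sh's verify_raid_bdev_state: select the named raid
    bdev (the jq '.[] | select(.name == ...)' step), then compare fields."""
    info = next((b for b in bdevs if b.get("name") == name), None)
    if info is None:
        return False
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational)
```

With the sample above, `verify_raid_bdev_state(RAID_BDEVS, "Existed_Raid", "configuring", "raid5f", 64, 4)` passes, while asking for `"online"` fails — matching the log, where the raid bdev stays `configuring` until all four base bdevs are discovered.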
00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.422 "name": "Existed_Raid", 00:16:27.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.422 "strip_size_kb": 64, 00:16:27.422 "state": "configuring", 00:16:27.422 "raid_level": "raid5f", 00:16:27.422 "superblock": false, 00:16:27.422 "num_base_bdevs": 4, 00:16:27.422 "num_base_bdevs_discovered": 1, 00:16:27.422 "num_base_bdevs_operational": 4, 00:16:27.422 "base_bdevs_list": [ 00:16:27.422 { 00:16:27.422 "name": "BaseBdev1", 00:16:27.422 "uuid": "8f0a7605-f957-4546-a84a-1b3e7b3f2316", 00:16:27.422 "is_configured": true, 00:16:27.422 "data_offset": 0, 00:16:27.422 "data_size": 65536 00:16:27.422 }, 00:16:27.422 { 00:16:27.422 "name": "BaseBdev2", 00:16:27.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.422 "is_configured": false, 00:16:27.422 "data_offset": 0, 00:16:27.422 "data_size": 0 00:16:27.422 }, 00:16:27.422 { 00:16:27.422 "name": "BaseBdev3", 00:16:27.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.422 "is_configured": false, 00:16:27.422 "data_offset": 0, 00:16:27.422 "data_size": 0 00:16:27.422 }, 00:16:27.422 { 00:16:27.422 "name": "BaseBdev4", 00:16:27.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.422 "is_configured": false, 00:16:27.422 "data_offset": 0, 00:16:27.422 "data_size": 0 00:16:27.422 } 00:16:27.422 ] 00:16:27.422 }' 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.422 01:36:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.680 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:27.680 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.680 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.680 
[2024-11-17 01:36:36.121281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.680 [2024-11-17 01:36:36.121333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:27.680 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.680 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:27.680 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.680 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.680 [2024-11-17 01:36:36.133340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.680 [2024-11-17 01:36:36.135130] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.680 [2024-11-17 01:36:36.135194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.680 [2024-11-17 01:36:36.135204] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:27.680 [2024-11-17 01:36:36.135230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:27.680 [2024-11-17 01:36:36.135237] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:27.681 [2024-11-17 01:36:36.135245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:27.681 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.681 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:27.681 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:27.681 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.939 "name": "Existed_Raid", 00:16:27.939 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:27.939 "strip_size_kb": 64, 00:16:27.939 "state": "configuring", 00:16:27.939 "raid_level": "raid5f", 00:16:27.939 "superblock": false, 00:16:27.939 "num_base_bdevs": 4, 00:16:27.939 "num_base_bdevs_discovered": 1, 00:16:27.939 "num_base_bdevs_operational": 4, 00:16:27.939 "base_bdevs_list": [ 00:16:27.939 { 00:16:27.939 "name": "BaseBdev1", 00:16:27.939 "uuid": "8f0a7605-f957-4546-a84a-1b3e7b3f2316", 00:16:27.939 "is_configured": true, 00:16:27.939 "data_offset": 0, 00:16:27.939 "data_size": 65536 00:16:27.939 }, 00:16:27.939 { 00:16:27.939 "name": "BaseBdev2", 00:16:27.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.939 "is_configured": false, 00:16:27.939 "data_offset": 0, 00:16:27.939 "data_size": 0 00:16:27.939 }, 00:16:27.939 { 00:16:27.939 "name": "BaseBdev3", 00:16:27.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.939 "is_configured": false, 00:16:27.939 "data_offset": 0, 00:16:27.939 "data_size": 0 00:16:27.939 }, 00:16:27.939 { 00:16:27.939 "name": "BaseBdev4", 00:16:27.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.939 "is_configured": false, 00:16:27.939 "data_offset": 0, 00:16:27.939 "data_size": 0 00:16:27.939 } 00:16:27.939 ] 00:16:27.939 }' 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.939 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.198 [2024-11-17 01:36:36.608198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.198 BaseBdev2 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.198 [ 00:16:28.198 { 00:16:28.198 "name": "BaseBdev2", 00:16:28.198 "aliases": [ 00:16:28.198 "1257c1d1-7afc-48ab-bfb2-e3132c9838e0" 00:16:28.198 ], 00:16:28.198 "product_name": "Malloc disk", 00:16:28.198 "block_size": 512, 00:16:28.198 "num_blocks": 65536, 00:16:28.198 "uuid": "1257c1d1-7afc-48ab-bfb2-e3132c9838e0", 00:16:28.198 "assigned_rate_limits": { 00:16:28.198 "rw_ios_per_sec": 0, 00:16:28.198 "rw_mbytes_per_sec": 0, 00:16:28.198 
"r_mbytes_per_sec": 0, 00:16:28.198 "w_mbytes_per_sec": 0 00:16:28.198 }, 00:16:28.198 "claimed": true, 00:16:28.198 "claim_type": "exclusive_write", 00:16:28.198 "zoned": false, 00:16:28.198 "supported_io_types": { 00:16:28.198 "read": true, 00:16:28.198 "write": true, 00:16:28.198 "unmap": true, 00:16:28.198 "flush": true, 00:16:28.198 "reset": true, 00:16:28.198 "nvme_admin": false, 00:16:28.198 "nvme_io": false, 00:16:28.198 "nvme_io_md": false, 00:16:28.198 "write_zeroes": true, 00:16:28.198 "zcopy": true, 00:16:28.198 "get_zone_info": false, 00:16:28.198 "zone_management": false, 00:16:28.198 "zone_append": false, 00:16:28.198 "compare": false, 00:16:28.198 "compare_and_write": false, 00:16:28.198 "abort": true, 00:16:28.198 "seek_hole": false, 00:16:28.198 "seek_data": false, 00:16:28.198 "copy": true, 00:16:28.198 "nvme_iov_md": false 00:16:28.198 }, 00:16:28.198 "memory_domains": [ 00:16:28.198 { 00:16:28.198 "dma_device_id": "system", 00:16:28.198 "dma_device_type": 1 00:16:28.198 }, 00:16:28.198 { 00:16:28.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.198 "dma_device_type": 2 00:16:28.198 } 00:16:28.198 ], 00:16:28.198 "driver_specific": {} 00:16:28.198 } 00:16:28.198 ] 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.198 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.457 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.457 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.457 "name": "Existed_Raid", 00:16:28.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.457 "strip_size_kb": 64, 00:16:28.457 "state": "configuring", 00:16:28.457 "raid_level": "raid5f", 00:16:28.457 "superblock": false, 00:16:28.457 "num_base_bdevs": 4, 00:16:28.457 "num_base_bdevs_discovered": 2, 00:16:28.457 "num_base_bdevs_operational": 4, 00:16:28.457 "base_bdevs_list": [ 00:16:28.457 { 00:16:28.457 "name": "BaseBdev1", 00:16:28.457 "uuid": 
"8f0a7605-f957-4546-a84a-1b3e7b3f2316", 00:16:28.457 "is_configured": true, 00:16:28.457 "data_offset": 0, 00:16:28.457 "data_size": 65536 00:16:28.457 }, 00:16:28.457 { 00:16:28.457 "name": "BaseBdev2", 00:16:28.457 "uuid": "1257c1d1-7afc-48ab-bfb2-e3132c9838e0", 00:16:28.457 "is_configured": true, 00:16:28.457 "data_offset": 0, 00:16:28.457 "data_size": 65536 00:16:28.457 }, 00:16:28.457 { 00:16:28.457 "name": "BaseBdev3", 00:16:28.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.457 "is_configured": false, 00:16:28.457 "data_offset": 0, 00:16:28.457 "data_size": 0 00:16:28.457 }, 00:16:28.457 { 00:16:28.457 "name": "BaseBdev4", 00:16:28.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.457 "is_configured": false, 00:16:28.457 "data_offset": 0, 00:16:28.457 "data_size": 0 00:16:28.457 } 00:16:28.457 ] 00:16:28.457 }' 00:16:28.457 01:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.457 01:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.716 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.716 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.716 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.975 [2024-11-17 01:36:37.175026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.975 BaseBdev3 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.975 [ 00:16:28.975 { 00:16:28.975 "name": "BaseBdev3", 00:16:28.975 "aliases": [ 00:16:28.975 "4fcbed91-2114-4ae6-96e7-169c7a9deb91" 00:16:28.975 ], 00:16:28.975 "product_name": "Malloc disk", 00:16:28.975 "block_size": 512, 00:16:28.975 "num_blocks": 65536, 00:16:28.975 "uuid": "4fcbed91-2114-4ae6-96e7-169c7a9deb91", 00:16:28.975 "assigned_rate_limits": { 00:16:28.975 "rw_ios_per_sec": 0, 00:16:28.975 "rw_mbytes_per_sec": 0, 00:16:28.975 "r_mbytes_per_sec": 0, 00:16:28.975 "w_mbytes_per_sec": 0 00:16:28.975 }, 00:16:28.975 "claimed": true, 00:16:28.975 "claim_type": "exclusive_write", 00:16:28.975 "zoned": false, 00:16:28.975 "supported_io_types": { 00:16:28.975 "read": true, 00:16:28.975 "write": true, 00:16:28.975 "unmap": true, 00:16:28.975 "flush": true, 00:16:28.975 "reset": true, 00:16:28.975 "nvme_admin": false, 
00:16:28.975 "nvme_io": false, 00:16:28.975 "nvme_io_md": false, 00:16:28.975 "write_zeroes": true, 00:16:28.975 "zcopy": true, 00:16:28.975 "get_zone_info": false, 00:16:28.975 "zone_management": false, 00:16:28.975 "zone_append": false, 00:16:28.975 "compare": false, 00:16:28.975 "compare_and_write": false, 00:16:28.975 "abort": true, 00:16:28.975 "seek_hole": false, 00:16:28.975 "seek_data": false, 00:16:28.975 "copy": true, 00:16:28.975 "nvme_iov_md": false 00:16:28.975 }, 00:16:28.975 "memory_domains": [ 00:16:28.975 { 00:16:28.975 "dma_device_id": "system", 00:16:28.975 "dma_device_type": 1 00:16:28.975 }, 00:16:28.975 { 00:16:28.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.975 "dma_device_type": 2 00:16:28.975 } 00:16:28.975 ], 00:16:28.975 "driver_specific": {} 00:16:28.975 } 00:16:28.975 ] 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.975 "name": "Existed_Raid", 00:16:28.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.975 "strip_size_kb": 64, 00:16:28.975 "state": "configuring", 00:16:28.975 "raid_level": "raid5f", 00:16:28.975 "superblock": false, 00:16:28.975 "num_base_bdevs": 4, 00:16:28.975 "num_base_bdevs_discovered": 3, 00:16:28.975 "num_base_bdevs_operational": 4, 00:16:28.975 "base_bdevs_list": [ 00:16:28.975 { 00:16:28.975 "name": "BaseBdev1", 00:16:28.975 "uuid": "8f0a7605-f957-4546-a84a-1b3e7b3f2316", 00:16:28.975 "is_configured": true, 00:16:28.975 "data_offset": 0, 00:16:28.975 "data_size": 65536 00:16:28.975 }, 00:16:28.975 { 00:16:28.975 "name": "BaseBdev2", 00:16:28.975 "uuid": "1257c1d1-7afc-48ab-bfb2-e3132c9838e0", 00:16:28.975 "is_configured": true, 00:16:28.975 "data_offset": 0, 00:16:28.975 "data_size": 65536 00:16:28.975 }, 00:16:28.975 { 
00:16:28.975 "name": "BaseBdev3", 00:16:28.975 "uuid": "4fcbed91-2114-4ae6-96e7-169c7a9deb91", 00:16:28.975 "is_configured": true, 00:16:28.975 "data_offset": 0, 00:16:28.975 "data_size": 65536 00:16:28.975 }, 00:16:28.975 { 00:16:28.975 "name": "BaseBdev4", 00:16:28.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.975 "is_configured": false, 00:16:28.975 "data_offset": 0, 00:16:28.975 "data_size": 0 00:16:28.975 } 00:16:28.975 ] 00:16:28.975 }' 00:16:28.975 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.976 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.235 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:29.235 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.235 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.235 [2024-11-17 01:36:37.692775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:29.235 [2024-11-17 01:36:37.692937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:29.235 [2024-11-17 01:36:37.692966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:29.235 [2024-11-17 01:36:37.693254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:29.495 [2024-11-17 01:36:37.700200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:29.495 [2024-11-17 01:36:37.700262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:29.495 [2024-11-17 01:36:37.700574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.495 BaseBdev4 00:16:29.495 01:36:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 [ 00:16:29.495 { 00:16:29.495 "name": "BaseBdev4", 00:16:29.495 "aliases": [ 00:16:29.495 "f2650f06-e1a0-4bdc-abd2-692b509b677e" 00:16:29.495 ], 00:16:29.495 "product_name": "Malloc disk", 00:16:29.495 "block_size": 512, 00:16:29.495 "num_blocks": 65536, 00:16:29.495 "uuid": "f2650f06-e1a0-4bdc-abd2-692b509b677e", 00:16:29.495 "assigned_rate_limits": { 00:16:29.495 "rw_ios_per_sec": 0, 00:16:29.495 
"rw_mbytes_per_sec": 0, 00:16:29.495 "r_mbytes_per_sec": 0, 00:16:29.495 "w_mbytes_per_sec": 0 00:16:29.495 }, 00:16:29.495 "claimed": true, 00:16:29.495 "claim_type": "exclusive_write", 00:16:29.495 "zoned": false, 00:16:29.495 "supported_io_types": { 00:16:29.495 "read": true, 00:16:29.495 "write": true, 00:16:29.495 "unmap": true, 00:16:29.495 "flush": true, 00:16:29.495 "reset": true, 00:16:29.495 "nvme_admin": false, 00:16:29.495 "nvme_io": false, 00:16:29.495 "nvme_io_md": false, 00:16:29.495 "write_zeroes": true, 00:16:29.495 "zcopy": true, 00:16:29.495 "get_zone_info": false, 00:16:29.495 "zone_management": false, 00:16:29.495 "zone_append": false, 00:16:29.495 "compare": false, 00:16:29.495 "compare_and_write": false, 00:16:29.495 "abort": true, 00:16:29.495 "seek_hole": false, 00:16:29.495 "seek_data": false, 00:16:29.495 "copy": true, 00:16:29.495 "nvme_iov_md": false 00:16:29.495 }, 00:16:29.495 "memory_domains": [ 00:16:29.495 { 00:16:29.495 "dma_device_id": "system", 00:16:29.495 "dma_device_type": 1 00:16:29.495 }, 00:16:29.495 { 00:16:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.495 "dma_device_type": 2 00:16:29.495 } 00:16:29.495 ], 00:16:29.495 "driver_specific": {} 00:16:29.495 } 00:16:29.495 ] 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.495 01:36:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.495 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.495 "name": "Existed_Raid", 00:16:29.495 "uuid": "a8a62510-7781-4f0f-a51e-dfffc0892811", 00:16:29.495 "strip_size_kb": 64, 00:16:29.495 "state": "online", 00:16:29.495 "raid_level": "raid5f", 00:16:29.495 "superblock": false, 00:16:29.495 "num_base_bdevs": 4, 00:16:29.495 "num_base_bdevs_discovered": 4, 00:16:29.495 "num_base_bdevs_operational": 4, 00:16:29.495 "base_bdevs_list": [ 00:16:29.495 { 00:16:29.495 "name": 
"BaseBdev1", 00:16:29.495 "uuid": "8f0a7605-f957-4546-a84a-1b3e7b3f2316", 00:16:29.495 "is_configured": true, 00:16:29.495 "data_offset": 0, 00:16:29.495 "data_size": 65536 00:16:29.495 }, 00:16:29.495 { 00:16:29.495 "name": "BaseBdev2", 00:16:29.495 "uuid": "1257c1d1-7afc-48ab-bfb2-e3132c9838e0", 00:16:29.495 "is_configured": true, 00:16:29.495 "data_offset": 0, 00:16:29.495 "data_size": 65536 00:16:29.495 }, 00:16:29.495 { 00:16:29.496 "name": "BaseBdev3", 00:16:29.496 "uuid": "4fcbed91-2114-4ae6-96e7-169c7a9deb91", 00:16:29.496 "is_configured": true, 00:16:29.496 "data_offset": 0, 00:16:29.496 "data_size": 65536 00:16:29.496 }, 00:16:29.496 { 00:16:29.496 "name": "BaseBdev4", 00:16:29.496 "uuid": "f2650f06-e1a0-4bdc-abd2-692b509b677e", 00:16:29.496 "is_configured": true, 00:16:29.496 "data_offset": 0, 00:16:29.496 "data_size": 65536 00:16:29.496 } 00:16:29.496 ] 00:16:29.496 }' 00:16:29.496 01:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.496 01:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.756 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.756 [2024-11-17 01:36:38.207906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:30.016 "name": "Existed_Raid", 00:16:30.016 "aliases": [ 00:16:30.016 "a8a62510-7781-4f0f-a51e-dfffc0892811" 00:16:30.016 ], 00:16:30.016 "product_name": "Raid Volume", 00:16:30.016 "block_size": 512, 00:16:30.016 "num_blocks": 196608, 00:16:30.016 "uuid": "a8a62510-7781-4f0f-a51e-dfffc0892811", 00:16:30.016 "assigned_rate_limits": { 00:16:30.016 "rw_ios_per_sec": 0, 00:16:30.016 "rw_mbytes_per_sec": 0, 00:16:30.016 "r_mbytes_per_sec": 0, 00:16:30.016 "w_mbytes_per_sec": 0 00:16:30.016 }, 00:16:30.016 "claimed": false, 00:16:30.016 "zoned": false, 00:16:30.016 "supported_io_types": { 00:16:30.016 "read": true, 00:16:30.016 "write": true, 00:16:30.016 "unmap": false, 00:16:30.016 "flush": false, 00:16:30.016 "reset": true, 00:16:30.016 "nvme_admin": false, 00:16:30.016 "nvme_io": false, 00:16:30.016 "nvme_io_md": false, 00:16:30.016 "write_zeroes": true, 00:16:30.016 "zcopy": false, 00:16:30.016 "get_zone_info": false, 00:16:30.016 "zone_management": false, 00:16:30.016 "zone_append": false, 00:16:30.016 "compare": false, 00:16:30.016 "compare_and_write": false, 00:16:30.016 "abort": false, 00:16:30.016 "seek_hole": false, 00:16:30.016 "seek_data": false, 00:16:30.016 "copy": false, 00:16:30.016 "nvme_iov_md": false 00:16:30.016 }, 00:16:30.016 "driver_specific": { 00:16:30.016 "raid": { 00:16:30.016 "uuid": "a8a62510-7781-4f0f-a51e-dfffc0892811", 00:16:30.016 "strip_size_kb": 64, 
00:16:30.016 "state": "online", 00:16:30.016 "raid_level": "raid5f", 00:16:30.016 "superblock": false, 00:16:30.016 "num_base_bdevs": 4, 00:16:30.016 "num_base_bdevs_discovered": 4, 00:16:30.016 "num_base_bdevs_operational": 4, 00:16:30.016 "base_bdevs_list": [ 00:16:30.016 { 00:16:30.016 "name": "BaseBdev1", 00:16:30.016 "uuid": "8f0a7605-f957-4546-a84a-1b3e7b3f2316", 00:16:30.016 "is_configured": true, 00:16:30.016 "data_offset": 0, 00:16:30.016 "data_size": 65536 00:16:30.016 }, 00:16:30.016 { 00:16:30.016 "name": "BaseBdev2", 00:16:30.016 "uuid": "1257c1d1-7afc-48ab-bfb2-e3132c9838e0", 00:16:30.016 "is_configured": true, 00:16:30.016 "data_offset": 0, 00:16:30.016 "data_size": 65536 00:16:30.016 }, 00:16:30.016 { 00:16:30.016 "name": "BaseBdev3", 00:16:30.016 "uuid": "4fcbed91-2114-4ae6-96e7-169c7a9deb91", 00:16:30.016 "is_configured": true, 00:16:30.016 "data_offset": 0, 00:16:30.016 "data_size": 65536 00:16:30.016 }, 00:16:30.016 { 00:16:30.016 "name": "BaseBdev4", 00:16:30.016 "uuid": "f2650f06-e1a0-4bdc-abd2-692b509b677e", 00:16:30.016 "is_configured": true, 00:16:30.016 "data_offset": 0, 00:16:30.016 "data_size": 65536 00:16:30.016 } 00:16:30.016 ] 00:16:30.016 } 00:16:30.016 } 00:16:30.016 }' 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:30.016 BaseBdev2 00:16:30.016 BaseBdev3 00:16:30.016 BaseBdev4' 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.016 01:36:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.016 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.017 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:30.277 [2024-11-17 01:36:38.499358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.277 01:36:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.277 "name": "Existed_Raid", 00:16:30.277 "uuid": "a8a62510-7781-4f0f-a51e-dfffc0892811", 00:16:30.277 "strip_size_kb": 64, 00:16:30.277 "state": "online", 00:16:30.277 "raid_level": "raid5f", 00:16:30.277 "superblock": false, 00:16:30.277 "num_base_bdevs": 4, 00:16:30.277 "num_base_bdevs_discovered": 3, 00:16:30.277 "num_base_bdevs_operational": 3, 00:16:30.277 "base_bdevs_list": [ 00:16:30.277 { 00:16:30.277 "name": null, 00:16:30.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.277 "is_configured": false, 00:16:30.277 "data_offset": 0, 00:16:30.277 "data_size": 65536 00:16:30.277 }, 00:16:30.277 { 00:16:30.277 "name": "BaseBdev2", 00:16:30.277 "uuid": "1257c1d1-7afc-48ab-bfb2-e3132c9838e0", 00:16:30.277 "is_configured": true, 00:16:30.277 "data_offset": 0, 00:16:30.277 "data_size": 65536 00:16:30.277 }, 00:16:30.277 { 00:16:30.277 "name": "BaseBdev3", 00:16:30.277 "uuid": "4fcbed91-2114-4ae6-96e7-169c7a9deb91", 00:16:30.277 "is_configured": true, 00:16:30.277 "data_offset": 0, 00:16:30.277 "data_size": 65536 00:16:30.277 }, 00:16:30.277 { 00:16:30.277 "name": "BaseBdev4", 00:16:30.277 "uuid": "f2650f06-e1a0-4bdc-abd2-692b509b677e", 00:16:30.277 "is_configured": true, 00:16:30.277 "data_offset": 0, 00:16:30.277 "data_size": 65536 00:16:30.277 } 00:16:30.277 ] 00:16:30.277 }' 00:16:30.277 
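The trace above shows `verify_raid_bdev_state` capturing the `Existed_Raid` JSON (via `bdev_raid_get_bdevs all` piped through `jq`) and checking it against the expected state after `BaseBdev1` was removed. As an illustrative sketch only (not part of the SPDK test suite), the same checks the shell helper performs can be expressed in Python against the JSON captured in the log, abridged to the fields actually inspected:

```python
import json

# Abridged copy of the raid_bdev_info JSON captured in the trace above;
# values are taken verbatim from the log after BaseBdev1 was deleted.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Illustrative Python mirror of the checks bdev_raid.sh's
    verify_raid_bdev_state performs via jq on the RPC output."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # Discovered base bdevs are the configured entries in base_bdevs_list;
    # a removed bdev stays in the list as an unconfigured placeholder.
    discovered = [b["name"] for b in info["base_bdevs_list"] if b["is_configured"]]
    assert len(discovered) == info["num_base_bdevs_discovered"]
    return discovered

print(verify_raid_bdev_state(raid_bdev_info, "online", "raid5f", 64, 3))
# -> ['BaseBdev2', 'BaseBdev3', 'BaseBdev4']
```

Note how raid5f keeps the array `online` with 3 of 4 base bdevs: because the level has redundancy, removing one member leaves a degraded-but-operational array, which is exactly what the test asserts next.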
01:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.277 01:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.847 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:30.847 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:30.847 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.848 [2024-11-17 01:36:39.124286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:30.848 [2024-11-17 01:36:39.124446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.848 [2024-11-17 01:36:39.222953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.848 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.848 [2024-11-17 01:36:39.286913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:31.107 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.107 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:31.107 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.107 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.107 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.108 [2024-11-17 01:36:39.448956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:31.108 [2024-11-17 01:36:39.449057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.108 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.368 01:36:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.368 BaseBdev2 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.368 [ 00:16:31.368 { 00:16:31.368 "name": "BaseBdev2", 00:16:31.368 "aliases": [ 00:16:31.368 "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c" 00:16:31.368 ], 00:16:31.368 "product_name": "Malloc disk", 00:16:31.368 "block_size": 512, 00:16:31.368 "num_blocks": 65536, 00:16:31.368 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:31.368 "assigned_rate_limits": { 00:16:31.368 "rw_ios_per_sec": 0, 00:16:31.368 "rw_mbytes_per_sec": 0, 00:16:31.368 "r_mbytes_per_sec": 0, 00:16:31.368 "w_mbytes_per_sec": 0 00:16:31.368 }, 00:16:31.368 "claimed": false, 00:16:31.368 "zoned": false, 00:16:31.368 "supported_io_types": { 00:16:31.368 "read": true, 00:16:31.368 "write": true, 00:16:31.368 "unmap": true, 00:16:31.368 "flush": true, 00:16:31.368 "reset": true, 00:16:31.368 "nvme_admin": false, 00:16:31.368 "nvme_io": false, 00:16:31.368 "nvme_io_md": false, 00:16:31.368 "write_zeroes": true, 00:16:31.368 "zcopy": true, 00:16:31.368 "get_zone_info": false, 00:16:31.368 "zone_management": false, 00:16:31.368 "zone_append": false, 00:16:31.368 "compare": false, 00:16:31.368 "compare_and_write": false, 00:16:31.368 "abort": true, 00:16:31.368 "seek_hole": false, 00:16:31.368 "seek_data": false, 00:16:31.368 "copy": true, 00:16:31.368 "nvme_iov_md": false 00:16:31.368 }, 00:16:31.368 "memory_domains": [ 00:16:31.368 { 00:16:31.368 "dma_device_id": "system", 00:16:31.368 "dma_device_type": 1 00:16:31.368 }, 
00:16:31.368 { 00:16:31.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.368 "dma_device_type": 2 00:16:31.368 } 00:16:31.368 ], 00:16:31.368 "driver_specific": {} 00:16:31.368 } 00:16:31.368 ] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.368 BaseBdev3 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.368 [ 00:16:31.368 { 00:16:31.368 "name": "BaseBdev3", 00:16:31.368 "aliases": [ 00:16:31.368 "4d8da87c-7f7b-4940-b28a-2345b986b1e8" 00:16:31.368 ], 00:16:31.368 "product_name": "Malloc disk", 00:16:31.368 "block_size": 512, 00:16:31.368 "num_blocks": 65536, 00:16:31.368 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:31.368 "assigned_rate_limits": { 00:16:31.368 "rw_ios_per_sec": 0, 00:16:31.368 "rw_mbytes_per_sec": 0, 00:16:31.368 "r_mbytes_per_sec": 0, 00:16:31.368 "w_mbytes_per_sec": 0 00:16:31.368 }, 00:16:31.368 "claimed": false, 00:16:31.368 "zoned": false, 00:16:31.368 "supported_io_types": { 00:16:31.368 "read": true, 00:16:31.368 "write": true, 00:16:31.368 "unmap": true, 00:16:31.368 "flush": true, 00:16:31.368 "reset": true, 00:16:31.368 "nvme_admin": false, 00:16:31.368 "nvme_io": false, 00:16:31.368 "nvme_io_md": false, 00:16:31.368 "write_zeroes": true, 00:16:31.368 "zcopy": true, 00:16:31.368 "get_zone_info": false, 00:16:31.368 "zone_management": false, 00:16:31.368 "zone_append": false, 00:16:31.368 "compare": false, 00:16:31.368 "compare_and_write": false, 00:16:31.368 "abort": true, 00:16:31.368 "seek_hole": false, 00:16:31.368 "seek_data": false, 00:16:31.368 "copy": true, 00:16:31.368 "nvme_iov_md": false 00:16:31.368 }, 00:16:31.368 "memory_domains": [ 00:16:31.368 { 00:16:31.368 "dma_device_id": "system", 00:16:31.368 
"dma_device_type": 1 00:16:31.368 }, 00:16:31.368 { 00:16:31.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.368 "dma_device_type": 2 00:16:31.368 } 00:16:31.368 ], 00:16:31.368 "driver_specific": {} 00:16:31.368 } 00:16:31.368 ] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.368 BaseBdev4 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.368 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.369 01:36:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.369 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.369 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.369 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:31.369 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.629 [ 00:16:31.629 { 00:16:31.629 "name": "BaseBdev4", 00:16:31.629 "aliases": [ 00:16:31.629 "6effa91f-5a66-4d79-a699-f73b3f8779b9" 00:16:31.629 ], 00:16:31.629 "product_name": "Malloc disk", 00:16:31.629 "block_size": 512, 00:16:31.629 "num_blocks": 65536, 00:16:31.629 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:31.629 "assigned_rate_limits": { 00:16:31.629 "rw_ios_per_sec": 0, 00:16:31.629 "rw_mbytes_per_sec": 0, 00:16:31.629 "r_mbytes_per_sec": 0, 00:16:31.629 "w_mbytes_per_sec": 0 00:16:31.629 }, 00:16:31.629 "claimed": false, 00:16:31.629 "zoned": false, 00:16:31.629 "supported_io_types": { 00:16:31.629 "read": true, 00:16:31.629 "write": true, 00:16:31.629 "unmap": true, 00:16:31.629 "flush": true, 00:16:31.629 "reset": true, 00:16:31.629 "nvme_admin": false, 00:16:31.629 "nvme_io": false, 00:16:31.629 "nvme_io_md": false, 00:16:31.629 "write_zeroes": true, 00:16:31.629 "zcopy": true, 00:16:31.629 "get_zone_info": false, 00:16:31.629 "zone_management": false, 00:16:31.629 "zone_append": false, 00:16:31.629 "compare": false, 00:16:31.629 "compare_and_write": false, 00:16:31.629 "abort": true, 00:16:31.629 "seek_hole": false, 00:16:31.629 "seek_data": false, 00:16:31.629 "copy": true, 00:16:31.629 "nvme_iov_md": false 00:16:31.629 }, 00:16:31.629 "memory_domains": [ 00:16:31.629 { 00:16:31.629 
"dma_device_id": "system", 00:16:31.629 "dma_device_type": 1 00:16:31.629 }, 00:16:31.629 { 00:16:31.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.629 "dma_device_type": 2 00:16:31.629 } 00:16:31.629 ], 00:16:31.629 "driver_specific": {} 00:16:31.629 } 00:16:31.629 ] 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.629 [2024-11-17 01:36:39.855923] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.629 [2024-11-17 01:36:39.856060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.629 [2024-11-17 01:36:39.856105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.629 [2024-11-17 01:36:39.858148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.629 [2024-11-17 01:36:39.858243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.629 "name": "Existed_Raid", 00:16:31.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.629 "strip_size_kb": 64, 00:16:31.629 "state": "configuring", 00:16:31.629 "raid_level": "raid5f", 00:16:31.629 "superblock": false, 00:16:31.629 
"num_base_bdevs": 4, 00:16:31.629 "num_base_bdevs_discovered": 3, 00:16:31.629 "num_base_bdevs_operational": 4, 00:16:31.629 "base_bdevs_list": [ 00:16:31.629 { 00:16:31.629 "name": "BaseBdev1", 00:16:31.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.629 "is_configured": false, 00:16:31.629 "data_offset": 0, 00:16:31.629 "data_size": 0 00:16:31.629 }, 00:16:31.629 { 00:16:31.629 "name": "BaseBdev2", 00:16:31.629 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:31.629 "is_configured": true, 00:16:31.629 "data_offset": 0, 00:16:31.629 "data_size": 65536 00:16:31.629 }, 00:16:31.629 { 00:16:31.629 "name": "BaseBdev3", 00:16:31.629 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:31.629 "is_configured": true, 00:16:31.629 "data_offset": 0, 00:16:31.629 "data_size": 65536 00:16:31.629 }, 00:16:31.629 { 00:16:31.629 "name": "BaseBdev4", 00:16:31.629 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:31.629 "is_configured": true, 00:16:31.629 "data_offset": 0, 00:16:31.629 "data_size": 65536 00:16:31.629 } 00:16:31.629 ] 00:16:31.629 }' 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.629 01:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.889 [2024-11-17 01:36:40.315321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.889 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.890 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.890 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.890 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.890 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.890 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.890 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.150 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.150 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.150 "name": "Existed_Raid", 00:16:32.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.150 "strip_size_kb": 64, 00:16:32.150 "state": "configuring", 00:16:32.150 "raid_level": "raid5f", 00:16:32.150 "superblock": false, 00:16:32.150 "num_base_bdevs": 4, 
00:16:32.150 "num_base_bdevs_discovered": 2, 00:16:32.150 "num_base_bdevs_operational": 4, 00:16:32.150 "base_bdevs_list": [ 00:16:32.150 { 00:16:32.150 "name": "BaseBdev1", 00:16:32.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.150 "is_configured": false, 00:16:32.150 "data_offset": 0, 00:16:32.150 "data_size": 0 00:16:32.150 }, 00:16:32.150 { 00:16:32.150 "name": null, 00:16:32.150 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:32.150 "is_configured": false, 00:16:32.150 "data_offset": 0, 00:16:32.150 "data_size": 65536 00:16:32.150 }, 00:16:32.150 { 00:16:32.150 "name": "BaseBdev3", 00:16:32.150 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:32.150 "is_configured": true, 00:16:32.150 "data_offset": 0, 00:16:32.150 "data_size": 65536 00:16:32.150 }, 00:16:32.150 { 00:16:32.150 "name": "BaseBdev4", 00:16:32.150 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:32.150 "is_configured": true, 00:16:32.150 "data_offset": 0, 00:16:32.150 "data_size": 65536 00:16:32.150 } 00:16:32.150 ] 00:16:32.150 }' 00:16:32.150 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.150 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:32.410 01:36:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.410 [2024-11-17 01:36:40.822977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.410 BaseBdev1 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:32.410 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.410 01:36:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.410 [ 00:16:32.410 { 00:16:32.410 "name": "BaseBdev1", 00:16:32.410 "aliases": [ 00:16:32.410 "9f4634bf-2ace-4384-af22-ebbd6c188a92" 00:16:32.410 ], 00:16:32.410 "product_name": "Malloc disk", 00:16:32.410 "block_size": 512, 00:16:32.410 "num_blocks": 65536, 00:16:32.410 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:32.410 "assigned_rate_limits": { 00:16:32.410 "rw_ios_per_sec": 0, 00:16:32.410 "rw_mbytes_per_sec": 0, 00:16:32.410 "r_mbytes_per_sec": 0, 00:16:32.410 "w_mbytes_per_sec": 0 00:16:32.410 }, 00:16:32.410 "claimed": true, 00:16:32.410 "claim_type": "exclusive_write", 00:16:32.410 "zoned": false, 00:16:32.410 "supported_io_types": { 00:16:32.410 "read": true, 00:16:32.410 "write": true, 00:16:32.410 "unmap": true, 00:16:32.410 "flush": true, 00:16:32.410 "reset": true, 00:16:32.410 "nvme_admin": false, 00:16:32.410 "nvme_io": false, 00:16:32.410 "nvme_io_md": false, 00:16:32.410 "write_zeroes": true, 00:16:32.410 "zcopy": true, 00:16:32.410 "get_zone_info": false, 00:16:32.410 "zone_management": false, 00:16:32.410 "zone_append": false, 00:16:32.410 "compare": false, 00:16:32.410 "compare_and_write": false, 00:16:32.410 "abort": true, 00:16:32.410 "seek_hole": false, 00:16:32.410 "seek_data": false, 00:16:32.410 "copy": true, 00:16:32.410 "nvme_iov_md": false 00:16:32.410 }, 00:16:32.410 "memory_domains": [ 00:16:32.410 { 00:16:32.410 "dma_device_id": "system", 00:16:32.410 "dma_device_type": 1 00:16:32.410 }, 00:16:32.410 { 00:16:32.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.410 "dma_device_type": 2 00:16:32.410 } 00:16:32.410 ], 00:16:32.410 "driver_specific": {} 00:16:32.410 } 00:16:32.410 ] 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:32.411 01:36:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.411 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.670 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.670 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.670 "name": "Existed_Raid", 00:16:32.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.671 "strip_size_kb": 64, 00:16:32.671 "state": 
"configuring", 00:16:32.671 "raid_level": "raid5f", 00:16:32.671 "superblock": false, 00:16:32.671 "num_base_bdevs": 4, 00:16:32.671 "num_base_bdevs_discovered": 3, 00:16:32.671 "num_base_bdevs_operational": 4, 00:16:32.671 "base_bdevs_list": [ 00:16:32.671 { 00:16:32.671 "name": "BaseBdev1", 00:16:32.671 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:32.671 "is_configured": true, 00:16:32.671 "data_offset": 0, 00:16:32.671 "data_size": 65536 00:16:32.671 }, 00:16:32.671 { 00:16:32.671 "name": null, 00:16:32.671 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:32.671 "is_configured": false, 00:16:32.671 "data_offset": 0, 00:16:32.671 "data_size": 65536 00:16:32.671 }, 00:16:32.671 { 00:16:32.671 "name": "BaseBdev3", 00:16:32.671 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:32.671 "is_configured": true, 00:16:32.671 "data_offset": 0, 00:16:32.671 "data_size": 65536 00:16:32.671 }, 00:16:32.671 { 00:16:32.671 "name": "BaseBdev4", 00:16:32.671 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:32.671 "is_configured": true, 00:16:32.671 "data_offset": 0, 00:16:32.671 "data_size": 65536 00:16:32.671 } 00:16:32.671 ] 00:16:32.671 }' 00:16:32.671 01:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.671 01:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.930 01:36:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.930 [2024-11-17 01:36:41.330124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.930 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.931 01:36:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.931 "name": "Existed_Raid", 00:16:32.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.931 "strip_size_kb": 64, 00:16:32.931 "state": "configuring", 00:16:32.931 "raid_level": "raid5f", 00:16:32.931 "superblock": false, 00:16:32.931 "num_base_bdevs": 4, 00:16:32.931 "num_base_bdevs_discovered": 2, 00:16:32.931 "num_base_bdevs_operational": 4, 00:16:32.931 "base_bdevs_list": [ 00:16:32.931 { 00:16:32.931 "name": "BaseBdev1", 00:16:32.931 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:32.931 "is_configured": true, 00:16:32.931 "data_offset": 0, 00:16:32.931 "data_size": 65536 00:16:32.931 }, 00:16:32.931 { 00:16:32.931 "name": null, 00:16:32.931 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:32.931 "is_configured": false, 00:16:32.931 "data_offset": 0, 00:16:32.931 "data_size": 65536 00:16:32.931 }, 00:16:32.931 { 00:16:32.931 "name": null, 00:16:32.931 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:32.931 "is_configured": false, 00:16:32.931 "data_offset": 0, 00:16:32.931 "data_size": 65536 00:16:32.931 }, 00:16:32.931 { 00:16:32.931 "name": "BaseBdev4", 00:16:32.931 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:32.931 "is_configured": true, 00:16:32.931 "data_offset": 0, 00:16:32.931 "data_size": 65536 00:16:32.931 } 00:16:32.931 ] 00:16:32.931 }' 00:16:32.931 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.931 01:36:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.501 [2024-11-17 01:36:41.789356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.501 
01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.501 "name": "Existed_Raid", 00:16:33.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.501 "strip_size_kb": 64, 00:16:33.501 "state": "configuring", 00:16:33.501 "raid_level": "raid5f", 00:16:33.501 "superblock": false, 00:16:33.501 "num_base_bdevs": 4, 00:16:33.501 "num_base_bdevs_discovered": 3, 00:16:33.501 "num_base_bdevs_operational": 4, 00:16:33.501 "base_bdevs_list": [ 00:16:33.501 { 00:16:33.501 "name": "BaseBdev1", 00:16:33.501 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:33.501 "is_configured": true, 00:16:33.501 "data_offset": 0, 00:16:33.501 "data_size": 65536 00:16:33.501 }, 00:16:33.501 { 00:16:33.501 "name": null, 00:16:33.501 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:33.501 "is_configured": 
false, 00:16:33.501 "data_offset": 0, 00:16:33.501 "data_size": 65536 00:16:33.501 }, 00:16:33.501 { 00:16:33.501 "name": "BaseBdev3", 00:16:33.501 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:33.501 "is_configured": true, 00:16:33.501 "data_offset": 0, 00:16:33.501 "data_size": 65536 00:16:33.501 }, 00:16:33.501 { 00:16:33.501 "name": "BaseBdev4", 00:16:33.501 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:33.501 "is_configured": true, 00:16:33.501 "data_offset": 0, 00:16:33.501 "data_size": 65536 00:16:33.501 } 00:16:33.501 ] 00:16:33.501 }' 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.501 01:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.072 [2024-11-17 01:36:42.276529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.072 "name": "Existed_Raid", 00:16:34.072 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:34.072 "strip_size_kb": 64, 00:16:34.072 "state": "configuring", 00:16:34.072 "raid_level": "raid5f", 00:16:34.072 "superblock": false, 00:16:34.072 "num_base_bdevs": 4, 00:16:34.072 "num_base_bdevs_discovered": 2, 00:16:34.072 "num_base_bdevs_operational": 4, 00:16:34.072 "base_bdevs_list": [ 00:16:34.072 { 00:16:34.072 "name": null, 00:16:34.072 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:34.072 "is_configured": false, 00:16:34.072 "data_offset": 0, 00:16:34.072 "data_size": 65536 00:16:34.072 }, 00:16:34.072 { 00:16:34.072 "name": null, 00:16:34.072 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:34.072 "is_configured": false, 00:16:34.072 "data_offset": 0, 00:16:34.072 "data_size": 65536 00:16:34.072 }, 00:16:34.072 { 00:16:34.072 "name": "BaseBdev3", 00:16:34.072 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:34.072 "is_configured": true, 00:16:34.072 "data_offset": 0, 00:16:34.072 "data_size": 65536 00:16:34.072 }, 00:16:34.072 { 00:16:34.072 "name": "BaseBdev4", 00:16:34.072 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:34.072 "is_configured": true, 00:16:34.072 "data_offset": 0, 00:16:34.072 "data_size": 65536 00:16:34.072 } 00:16:34.072 ] 00:16:34.072 }' 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.072 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.642 [2024-11-17 01:36:42.861924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.642 01:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.642 "name": "Existed_Raid", 00:16:34.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.642 "strip_size_kb": 64, 00:16:34.642 "state": "configuring", 00:16:34.642 "raid_level": "raid5f", 00:16:34.642 "superblock": false, 00:16:34.642 "num_base_bdevs": 4, 00:16:34.642 "num_base_bdevs_discovered": 3, 00:16:34.642 "num_base_bdevs_operational": 4, 00:16:34.642 "base_bdevs_list": [ 00:16:34.642 { 00:16:34.642 "name": null, 00:16:34.642 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:34.642 "is_configured": false, 00:16:34.642 "data_offset": 0, 00:16:34.642 "data_size": 65536 00:16:34.642 }, 00:16:34.642 { 00:16:34.642 "name": "BaseBdev2", 00:16:34.642 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:34.642 "is_configured": true, 00:16:34.642 "data_offset": 0, 00:16:34.642 "data_size": 65536 00:16:34.642 }, 00:16:34.643 { 00:16:34.643 "name": "BaseBdev3", 00:16:34.643 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:34.643 "is_configured": true, 00:16:34.643 "data_offset": 0, 00:16:34.643 "data_size": 65536 00:16:34.643 }, 00:16:34.643 { 00:16:34.643 "name": "BaseBdev4", 00:16:34.643 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:34.643 "is_configured": true, 00:16:34.643 "data_offset": 0, 00:16:34.643 "data_size": 65536 00:16:34.643 } 00:16:34.643 ] 00:16:34.643 }' 00:16:34.643 01:36:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.643 01:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.903 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f4634bf-2ace-4384-af22-ebbd6c188a92 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.164 [2024-11-17 01:36:43.441161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:35.164 [2024-11-17 
01:36:43.441308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:35.164 [2024-11-17 01:36:43.441333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:35.164 [2024-11-17 01:36:43.441623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:35.164 [2024-11-17 01:36:43.448224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:35.164 [2024-11-17 01:36:43.448287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:35.164 [2024-11-17 01:36:43.448618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.164 NewBaseBdev 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.164 [ 00:16:35.164 { 00:16:35.164 "name": "NewBaseBdev", 00:16:35.164 "aliases": [ 00:16:35.164 "9f4634bf-2ace-4384-af22-ebbd6c188a92" 00:16:35.164 ], 00:16:35.164 "product_name": "Malloc disk", 00:16:35.164 "block_size": 512, 00:16:35.164 "num_blocks": 65536, 00:16:35.164 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:35.164 "assigned_rate_limits": { 00:16:35.164 "rw_ios_per_sec": 0, 00:16:35.164 "rw_mbytes_per_sec": 0, 00:16:35.164 "r_mbytes_per_sec": 0, 00:16:35.164 "w_mbytes_per_sec": 0 00:16:35.164 }, 00:16:35.164 "claimed": true, 00:16:35.164 "claim_type": "exclusive_write", 00:16:35.164 "zoned": false, 00:16:35.164 "supported_io_types": { 00:16:35.164 "read": true, 00:16:35.164 "write": true, 00:16:35.164 "unmap": true, 00:16:35.164 "flush": true, 00:16:35.164 "reset": true, 00:16:35.164 "nvme_admin": false, 00:16:35.164 "nvme_io": false, 00:16:35.164 "nvme_io_md": false, 00:16:35.164 "write_zeroes": true, 00:16:35.164 "zcopy": true, 00:16:35.164 "get_zone_info": false, 00:16:35.164 "zone_management": false, 00:16:35.164 "zone_append": false, 00:16:35.164 "compare": false, 00:16:35.164 "compare_and_write": false, 00:16:35.164 "abort": true, 00:16:35.164 "seek_hole": false, 00:16:35.164 "seek_data": false, 00:16:35.164 "copy": true, 00:16:35.164 "nvme_iov_md": false 00:16:35.164 }, 00:16:35.164 "memory_domains": [ 00:16:35.164 { 00:16:35.164 "dma_device_id": "system", 00:16:35.164 "dma_device_type": 1 00:16:35.164 }, 00:16:35.164 { 00:16:35.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.164 "dma_device_type": 2 00:16:35.164 } 
00:16:35.164 ], 00:16:35.164 "driver_specific": {} 00:16:35.164 } 00:16:35.164 ] 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.164 "name": "Existed_Raid", 00:16:35.164 "uuid": "e9ef2019-0399-49b5-b473-793fdfd4bb33", 00:16:35.164 "strip_size_kb": 64, 00:16:35.164 "state": "online", 00:16:35.164 "raid_level": "raid5f", 00:16:35.164 "superblock": false, 00:16:35.164 "num_base_bdevs": 4, 00:16:35.164 "num_base_bdevs_discovered": 4, 00:16:35.164 "num_base_bdevs_operational": 4, 00:16:35.164 "base_bdevs_list": [ 00:16:35.164 { 00:16:35.164 "name": "NewBaseBdev", 00:16:35.164 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:35.164 "is_configured": true, 00:16:35.164 "data_offset": 0, 00:16:35.164 "data_size": 65536 00:16:35.164 }, 00:16:35.164 { 00:16:35.164 "name": "BaseBdev2", 00:16:35.164 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:35.164 "is_configured": true, 00:16:35.164 "data_offset": 0, 00:16:35.164 "data_size": 65536 00:16:35.164 }, 00:16:35.164 { 00:16:35.164 "name": "BaseBdev3", 00:16:35.164 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:35.164 "is_configured": true, 00:16:35.164 "data_offset": 0, 00:16:35.164 "data_size": 65536 00:16:35.164 }, 00:16:35.164 { 00:16:35.164 "name": "BaseBdev4", 00:16:35.164 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:35.164 "is_configured": true, 00:16:35.164 "data_offset": 0, 00:16:35.164 "data_size": 65536 00:16:35.164 } 00:16:35.164 ] 00:16:35.164 }' 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.164 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.735 [2024-11-17 01:36:43.960141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.735 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.735 "name": "Existed_Raid", 00:16:35.735 "aliases": [ 00:16:35.735 "e9ef2019-0399-49b5-b473-793fdfd4bb33" 00:16:35.735 ], 00:16:35.735 "product_name": "Raid Volume", 00:16:35.735 "block_size": 512, 00:16:35.735 "num_blocks": 196608, 00:16:35.735 "uuid": "e9ef2019-0399-49b5-b473-793fdfd4bb33", 00:16:35.735 "assigned_rate_limits": { 00:16:35.735 "rw_ios_per_sec": 0, 00:16:35.735 "rw_mbytes_per_sec": 0, 00:16:35.735 "r_mbytes_per_sec": 0, 00:16:35.735 "w_mbytes_per_sec": 0 00:16:35.735 }, 00:16:35.735 "claimed": false, 00:16:35.735 "zoned": false, 00:16:35.735 "supported_io_types": { 00:16:35.735 "read": true, 00:16:35.735 "write": true, 00:16:35.735 "unmap": false, 00:16:35.735 "flush": false, 00:16:35.735 "reset": true, 00:16:35.735 "nvme_admin": false, 00:16:35.735 "nvme_io": false, 00:16:35.735 "nvme_io_md": 
false, 00:16:35.735 "write_zeroes": true, 00:16:35.735 "zcopy": false, 00:16:35.735 "get_zone_info": false, 00:16:35.735 "zone_management": false, 00:16:35.735 "zone_append": false, 00:16:35.735 "compare": false, 00:16:35.735 "compare_and_write": false, 00:16:35.735 "abort": false, 00:16:35.735 "seek_hole": false, 00:16:35.735 "seek_data": false, 00:16:35.736 "copy": false, 00:16:35.736 "nvme_iov_md": false 00:16:35.736 }, 00:16:35.736 "driver_specific": { 00:16:35.736 "raid": { 00:16:35.736 "uuid": "e9ef2019-0399-49b5-b473-793fdfd4bb33", 00:16:35.736 "strip_size_kb": 64, 00:16:35.736 "state": "online", 00:16:35.736 "raid_level": "raid5f", 00:16:35.736 "superblock": false, 00:16:35.736 "num_base_bdevs": 4, 00:16:35.736 "num_base_bdevs_discovered": 4, 00:16:35.736 "num_base_bdevs_operational": 4, 00:16:35.736 "base_bdevs_list": [ 00:16:35.736 { 00:16:35.736 "name": "NewBaseBdev", 00:16:35.736 "uuid": "9f4634bf-2ace-4384-af22-ebbd6c188a92", 00:16:35.736 "is_configured": true, 00:16:35.736 "data_offset": 0, 00:16:35.736 "data_size": 65536 00:16:35.736 }, 00:16:35.736 { 00:16:35.736 "name": "BaseBdev2", 00:16:35.736 "uuid": "967d8fe4-9a07-470d-8b8b-e0c8fc424e7c", 00:16:35.736 "is_configured": true, 00:16:35.736 "data_offset": 0, 00:16:35.736 "data_size": 65536 00:16:35.736 }, 00:16:35.736 { 00:16:35.736 "name": "BaseBdev3", 00:16:35.736 "uuid": "4d8da87c-7f7b-4940-b28a-2345b986b1e8", 00:16:35.736 "is_configured": true, 00:16:35.736 "data_offset": 0, 00:16:35.736 "data_size": 65536 00:16:35.736 }, 00:16:35.736 { 00:16:35.736 "name": "BaseBdev4", 00:16:35.736 "uuid": "6effa91f-5a66-4d79-a699-f73b3f8779b9", 00:16:35.736 "is_configured": true, 00:16:35.736 "data_offset": 0, 00:16:35.736 "data_size": 65536 00:16:35.736 } 00:16:35.736 ] 00:16:35.736 } 00:16:35.736 } 00:16:35.736 }' 00:16:35.736 01:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.736 01:36:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:35.736 BaseBdev2 00:16:35.736 BaseBdev3 00:16:35.736 BaseBdev4' 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.736 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.996 01:36:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.996 [2024-11-17 01:36:44.307264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.996 [2024-11-17 01:36:44.307339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.996 [2024-11-17 01:36:44.307464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.996 [2024-11-17 01:36:44.307886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.996 [2024-11-17 01:36:44.307946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82503 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82503 ']' 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82503 00:16:35.996 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:35.997 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:35.997 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82503 00:16:35.997 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.997 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.997 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82503' 00:16:35.997 killing process with pid 82503 00:16:35.997 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82503 00:16:35.997 [2024-11-17 01:36:44.356514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.997 01:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82503 00:16:36.567 [2024-11-17 01:36:44.745304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:37.561 00:16:37.561 real 0m11.605s 00:16:37.561 user 0m18.441s 00:16:37.561 sys 0m2.168s 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.561 ************************************ 00:16:37.561 END TEST raid5f_state_function_test 00:16:37.561 ************************************ 00:16:37.561 01:36:45 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:37.561 01:36:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:37.561 01:36:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.561 01:36:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.561 ************************************ 00:16:37.561 START TEST 
raid5f_state_function_test_sb 00:16:37.561 ************************************ 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:37.561 
01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:37.561 Process raid pid: 83169 00:16:37.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83169 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83169' 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83169 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83169 ']' 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.561 01:36:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.561 [2024-11-17 01:36:46.002251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:37.561 [2024-11-17 01:36:46.002433] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.821 [2024-11-17 01:36:46.174824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.080 [2024-11-17 01:36:46.290630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.080 [2024-11-17 01:36:46.482803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.080 [2024-11-17 01:36:46.482931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.651 [2024-11-17 01:36:46.828643] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.651 [2024-11-17 01:36:46.828908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.651 [2024-11-17 01:36:46.828970] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.651 [2024-11-17 01:36:46.829009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.651 [2024-11-17 01:36:46.829138] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:38.651 [2024-11-17 01:36:46.829186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.651 [2024-11-17 01:36:46.829215] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:38.651 [2024-11-17 01:36:46.829314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.651 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.651 "name": "Existed_Raid", 00:16:38.651 "uuid": "88d631ec-4c86-43fb-b926-12bbca57d370", 00:16:38.651 "strip_size_kb": 64, 00:16:38.651 "state": "configuring", 00:16:38.651 "raid_level": "raid5f", 00:16:38.651 "superblock": true, 00:16:38.651 "num_base_bdevs": 4, 00:16:38.651 "num_base_bdevs_discovered": 0, 00:16:38.651 "num_base_bdevs_operational": 4, 00:16:38.651 "base_bdevs_list": [ 00:16:38.651 { 00:16:38.651 "name": "BaseBdev1", 00:16:38.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.651 "is_configured": false, 00:16:38.651 "data_offset": 0, 00:16:38.651 "data_size": 0 00:16:38.651 }, 00:16:38.651 { 00:16:38.651 "name": "BaseBdev2", 00:16:38.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.651 "is_configured": false, 00:16:38.651 "data_offset": 0, 00:16:38.651 "data_size": 0 00:16:38.651 }, 00:16:38.651 { 00:16:38.651 "name": "BaseBdev3", 00:16:38.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.651 "is_configured": false, 00:16:38.651 "data_offset": 0, 00:16:38.651 "data_size": 0 00:16:38.651 }, 00:16:38.652 { 00:16:38.652 "name": "BaseBdev4", 00:16:38.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.652 "is_configured": false, 00:16:38.652 "data_offset": 0, 00:16:38.652 "data_size": 0 00:16:38.652 } 00:16:38.652 ] 00:16:38.652 }' 00:16:38.652 01:36:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.652 01:36:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.911 [2024-11-17 01:36:47.299791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.911 [2024-11-17 01:36:47.299869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.911 [2024-11-17 01:36:47.311777] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.911 [2024-11-17 01:36:47.311864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.911 [2024-11-17 01:36:47.311890] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.911 [2024-11-17 01:36:47.311913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.911 [2024-11-17 01:36:47.311933] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:38.911 [2024-11-17 01:36:47.311953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.911 [2024-11-17 01:36:47.311975] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:38.911 [2024-11-17 01:36:47.312020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.911 [2024-11-17 01:36:47.360635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.911 BaseBdev1 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.911 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.171 [ 00:16:39.171 { 00:16:39.171 "name": "BaseBdev1", 00:16:39.171 "aliases": [ 00:16:39.171 "e3164761-8beb-4bc6-857a-230e27be36d1" 00:16:39.171 ], 00:16:39.171 "product_name": "Malloc disk", 00:16:39.171 "block_size": 512, 00:16:39.171 "num_blocks": 65536, 00:16:39.171 "uuid": "e3164761-8beb-4bc6-857a-230e27be36d1", 00:16:39.171 "assigned_rate_limits": { 00:16:39.171 "rw_ios_per_sec": 0, 00:16:39.171 "rw_mbytes_per_sec": 0, 00:16:39.171 "r_mbytes_per_sec": 0, 00:16:39.171 "w_mbytes_per_sec": 0 00:16:39.171 }, 00:16:39.171 "claimed": true, 00:16:39.171 "claim_type": "exclusive_write", 00:16:39.171 "zoned": false, 00:16:39.171 "supported_io_types": { 00:16:39.171 "read": true, 00:16:39.171 "write": true, 00:16:39.171 "unmap": true, 00:16:39.171 "flush": true, 00:16:39.171 "reset": true, 00:16:39.171 "nvme_admin": false, 00:16:39.171 "nvme_io": false, 00:16:39.171 "nvme_io_md": false, 00:16:39.171 "write_zeroes": true, 00:16:39.171 "zcopy": true, 00:16:39.171 "get_zone_info": false, 00:16:39.171 "zone_management": false, 00:16:39.171 "zone_append": false, 00:16:39.171 "compare": false, 00:16:39.171 "compare_and_write": false, 00:16:39.171 "abort": true, 00:16:39.171 "seek_hole": false, 00:16:39.171 "seek_data": false, 00:16:39.171 "copy": true, 00:16:39.171 "nvme_iov_md": false 00:16:39.171 }, 00:16:39.171 "memory_domains": [ 00:16:39.171 { 00:16:39.171 "dma_device_id": "system", 00:16:39.171 "dma_device_type": 1 00:16:39.171 }, 00:16:39.171 { 00:16:39.171 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:39.171 "dma_device_type": 2 00:16:39.171 } 00:16:39.171 ], 00:16:39.171 "driver_specific": {} 00:16:39.171 } 00:16:39.171 ] 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.171 01:36:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.171 "name": "Existed_Raid", 00:16:39.171 "uuid": "1f1c4814-4ab9-4278-887a-39ee40104b8e", 00:16:39.171 "strip_size_kb": 64, 00:16:39.171 "state": "configuring", 00:16:39.171 "raid_level": "raid5f", 00:16:39.171 "superblock": true, 00:16:39.171 "num_base_bdevs": 4, 00:16:39.171 "num_base_bdevs_discovered": 1, 00:16:39.171 "num_base_bdevs_operational": 4, 00:16:39.171 "base_bdevs_list": [ 00:16:39.171 { 00:16:39.171 "name": "BaseBdev1", 00:16:39.171 "uuid": "e3164761-8beb-4bc6-857a-230e27be36d1", 00:16:39.171 "is_configured": true, 00:16:39.171 "data_offset": 2048, 00:16:39.171 "data_size": 63488 00:16:39.171 }, 00:16:39.171 { 00:16:39.171 "name": "BaseBdev2", 00:16:39.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.171 "is_configured": false, 00:16:39.171 "data_offset": 0, 00:16:39.171 "data_size": 0 00:16:39.171 }, 00:16:39.171 { 00:16:39.171 "name": "BaseBdev3", 00:16:39.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.171 "is_configured": false, 00:16:39.171 "data_offset": 0, 00:16:39.171 "data_size": 0 00:16:39.171 }, 00:16:39.171 { 00:16:39.171 "name": "BaseBdev4", 00:16:39.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.171 "is_configured": false, 00:16:39.171 "data_offset": 0, 00:16:39.171 "data_size": 0 00:16:39.171 } 00:16:39.171 ] 00:16:39.171 }' 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.171 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:39.432 01:36:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.432 [2024-11-17 01:36:47.863823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.432 [2024-11-17 01:36:47.863930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.432 [2024-11-17 01:36:47.875872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.432 [2024-11-17 01:36:47.877716] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.432 [2024-11-17 01:36:47.877825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.432 [2024-11-17 01:36:47.877857] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.432 [2024-11-17 01:36:47.877882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.432 [2024-11-17 01:36:47.877904] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:39.432 [2024-11-17 01:36:47.877938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.432 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.692 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.692 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.692 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.692 01:36:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.692 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.692 "name": "Existed_Raid", 00:16:39.692 "uuid": "e346f931-0000-4c6c-a11f-b3574944235f", 00:16:39.692 "strip_size_kb": 64, 00:16:39.692 "state": "configuring", 00:16:39.692 "raid_level": "raid5f", 00:16:39.692 "superblock": true, 00:16:39.692 "num_base_bdevs": 4, 00:16:39.692 "num_base_bdevs_discovered": 1, 00:16:39.692 "num_base_bdevs_operational": 4, 00:16:39.692 "base_bdevs_list": [ 00:16:39.692 { 00:16:39.692 "name": "BaseBdev1", 00:16:39.692 "uuid": "e3164761-8beb-4bc6-857a-230e27be36d1", 00:16:39.692 "is_configured": true, 00:16:39.692 "data_offset": 2048, 00:16:39.692 "data_size": 63488 00:16:39.692 }, 00:16:39.692 { 00:16:39.692 "name": "BaseBdev2", 00:16:39.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.692 "is_configured": false, 00:16:39.692 "data_offset": 0, 00:16:39.692 "data_size": 0 00:16:39.692 }, 00:16:39.692 { 00:16:39.692 "name": "BaseBdev3", 00:16:39.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.692 "is_configured": false, 00:16:39.692 "data_offset": 0, 00:16:39.692 "data_size": 0 00:16:39.692 }, 00:16:39.692 { 00:16:39.692 "name": "BaseBdev4", 00:16:39.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.693 "is_configured": false, 00:16:39.693 "data_offset": 0, 00:16:39.693 "data_size": 0 00:16:39.693 } 00:16:39.693 ] 00:16:39.693 }' 00:16:39.693 01:36:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.693 01:36:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.953 [2024-11-17 01:36:48.399183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.953 BaseBdev2 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.953 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.213 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.213 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.213 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.213 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.213 [ 00:16:40.213 { 00:16:40.213 "name": "BaseBdev2", 00:16:40.213 "aliases": [ 00:16:40.213 
"9b3e8d6d-b237-47c7-bed8-6811e44b3fd3" 00:16:40.213 ], 00:16:40.213 "product_name": "Malloc disk", 00:16:40.213 "block_size": 512, 00:16:40.213 "num_blocks": 65536, 00:16:40.213 "uuid": "9b3e8d6d-b237-47c7-bed8-6811e44b3fd3", 00:16:40.213 "assigned_rate_limits": { 00:16:40.213 "rw_ios_per_sec": 0, 00:16:40.213 "rw_mbytes_per_sec": 0, 00:16:40.213 "r_mbytes_per_sec": 0, 00:16:40.213 "w_mbytes_per_sec": 0 00:16:40.213 }, 00:16:40.213 "claimed": true, 00:16:40.213 "claim_type": "exclusive_write", 00:16:40.213 "zoned": false, 00:16:40.213 "supported_io_types": { 00:16:40.213 "read": true, 00:16:40.213 "write": true, 00:16:40.213 "unmap": true, 00:16:40.213 "flush": true, 00:16:40.213 "reset": true, 00:16:40.213 "nvme_admin": false, 00:16:40.213 "nvme_io": false, 00:16:40.213 "nvme_io_md": false, 00:16:40.213 "write_zeroes": true, 00:16:40.213 "zcopy": true, 00:16:40.213 "get_zone_info": false, 00:16:40.213 "zone_management": false, 00:16:40.213 "zone_append": false, 00:16:40.213 "compare": false, 00:16:40.213 "compare_and_write": false, 00:16:40.213 "abort": true, 00:16:40.214 "seek_hole": false, 00:16:40.214 "seek_data": false, 00:16:40.214 "copy": true, 00:16:40.214 "nvme_iov_md": false 00:16:40.214 }, 00:16:40.214 "memory_domains": [ 00:16:40.214 { 00:16:40.214 "dma_device_id": "system", 00:16:40.214 "dma_device_type": 1 00:16:40.214 }, 00:16:40.214 { 00:16:40.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.214 "dma_device_type": 2 00:16:40.214 } 00:16:40.214 ], 00:16:40.214 "driver_specific": {} 00:16:40.214 } 00:16:40.214 ] 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.214 "name": "Existed_Raid", 00:16:40.214 "uuid": 
"e346f931-0000-4c6c-a11f-b3574944235f", 00:16:40.214 "strip_size_kb": 64, 00:16:40.214 "state": "configuring", 00:16:40.214 "raid_level": "raid5f", 00:16:40.214 "superblock": true, 00:16:40.214 "num_base_bdevs": 4, 00:16:40.214 "num_base_bdevs_discovered": 2, 00:16:40.214 "num_base_bdevs_operational": 4, 00:16:40.214 "base_bdevs_list": [ 00:16:40.214 { 00:16:40.214 "name": "BaseBdev1", 00:16:40.214 "uuid": "e3164761-8beb-4bc6-857a-230e27be36d1", 00:16:40.214 "is_configured": true, 00:16:40.214 "data_offset": 2048, 00:16:40.214 "data_size": 63488 00:16:40.214 }, 00:16:40.214 { 00:16:40.214 "name": "BaseBdev2", 00:16:40.214 "uuid": "9b3e8d6d-b237-47c7-bed8-6811e44b3fd3", 00:16:40.214 "is_configured": true, 00:16:40.214 "data_offset": 2048, 00:16:40.214 "data_size": 63488 00:16:40.214 }, 00:16:40.214 { 00:16:40.214 "name": "BaseBdev3", 00:16:40.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.214 "is_configured": false, 00:16:40.214 "data_offset": 0, 00:16:40.214 "data_size": 0 00:16:40.214 }, 00:16:40.214 { 00:16:40.214 "name": "BaseBdev4", 00:16:40.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.214 "is_configured": false, 00:16:40.214 "data_offset": 0, 00:16:40.214 "data_size": 0 00:16:40.214 } 00:16:40.214 ] 00:16:40.214 }' 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.214 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.474 [2024-11-17 01:36:48.914039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.474 BaseBdev3 
00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.474 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.734 [ 00:16:40.734 { 00:16:40.734 "name": "BaseBdev3", 00:16:40.734 "aliases": [ 00:16:40.734 "45b70ccc-abd0-4cba-87d5-138af54da8f4" 00:16:40.734 ], 00:16:40.734 "product_name": "Malloc disk", 00:16:40.734 "block_size": 512, 00:16:40.734 "num_blocks": 65536, 00:16:40.734 "uuid": "45b70ccc-abd0-4cba-87d5-138af54da8f4", 00:16:40.734 
"assigned_rate_limits": { 00:16:40.734 "rw_ios_per_sec": 0, 00:16:40.734 "rw_mbytes_per_sec": 0, 00:16:40.734 "r_mbytes_per_sec": 0, 00:16:40.734 "w_mbytes_per_sec": 0 00:16:40.734 }, 00:16:40.734 "claimed": true, 00:16:40.734 "claim_type": "exclusive_write", 00:16:40.734 "zoned": false, 00:16:40.734 "supported_io_types": { 00:16:40.734 "read": true, 00:16:40.734 "write": true, 00:16:40.734 "unmap": true, 00:16:40.734 "flush": true, 00:16:40.734 "reset": true, 00:16:40.734 "nvme_admin": false, 00:16:40.734 "nvme_io": false, 00:16:40.734 "nvme_io_md": false, 00:16:40.734 "write_zeroes": true, 00:16:40.734 "zcopy": true, 00:16:40.734 "get_zone_info": false, 00:16:40.734 "zone_management": false, 00:16:40.734 "zone_append": false, 00:16:40.734 "compare": false, 00:16:40.734 "compare_and_write": false, 00:16:40.734 "abort": true, 00:16:40.734 "seek_hole": false, 00:16:40.734 "seek_data": false, 00:16:40.734 "copy": true, 00:16:40.734 "nvme_iov_md": false 00:16:40.734 }, 00:16:40.734 "memory_domains": [ 00:16:40.734 { 00:16:40.734 "dma_device_id": "system", 00:16:40.734 "dma_device_type": 1 00:16:40.734 }, 00:16:40.734 { 00:16:40.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.734 "dma_device_type": 2 00:16:40.734 } 00:16:40.734 ], 00:16:40.734 "driver_specific": {} 00:16:40.734 } 00:16:40.734 ] 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.734 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.735 "name": "Existed_Raid", 00:16:40.735 "uuid": "e346f931-0000-4c6c-a11f-b3574944235f", 00:16:40.735 "strip_size_kb": 64, 00:16:40.735 "state": "configuring", 00:16:40.735 "raid_level": "raid5f", 00:16:40.735 "superblock": true, 00:16:40.735 "num_base_bdevs": 4, 00:16:40.735 "num_base_bdevs_discovered": 3, 
00:16:40.735 "num_base_bdevs_operational": 4, 00:16:40.735 "base_bdevs_list": [ 00:16:40.735 { 00:16:40.735 "name": "BaseBdev1", 00:16:40.735 "uuid": "e3164761-8beb-4bc6-857a-230e27be36d1", 00:16:40.735 "is_configured": true, 00:16:40.735 "data_offset": 2048, 00:16:40.735 "data_size": 63488 00:16:40.735 }, 00:16:40.735 { 00:16:40.735 "name": "BaseBdev2", 00:16:40.735 "uuid": "9b3e8d6d-b237-47c7-bed8-6811e44b3fd3", 00:16:40.735 "is_configured": true, 00:16:40.735 "data_offset": 2048, 00:16:40.735 "data_size": 63488 00:16:40.735 }, 00:16:40.735 { 00:16:40.735 "name": "BaseBdev3", 00:16:40.735 "uuid": "45b70ccc-abd0-4cba-87d5-138af54da8f4", 00:16:40.735 "is_configured": true, 00:16:40.735 "data_offset": 2048, 00:16:40.735 "data_size": 63488 00:16:40.735 }, 00:16:40.735 { 00:16:40.735 "name": "BaseBdev4", 00:16:40.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.735 "is_configured": false, 00:16:40.735 "data_offset": 0, 00:16:40.735 "data_size": 0 00:16:40.735 } 00:16:40.735 ] 00:16:40.735 }' 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.735 01:36:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 [2024-11-17 01:36:49.395621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:40.994 [2024-11-17 01:36:49.395981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:40.994 [2024-11-17 01:36:49.396038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:40.994 [2024-11-17 
01:36:49.396335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:40.994 BaseBdev4 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 [2024-11-17 01:36:49.403822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:40.994 [2024-11-17 01:36:49.403891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:40.994 [2024-11-17 01:36:49.404198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:40.994 01:36:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.994 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.994 [ 00:16:40.994 { 00:16:40.994 "name": "BaseBdev4", 00:16:40.994 "aliases": [ 00:16:40.994 "dc0383b2-65c8-40f2-9fad-95bfd053b39b" 00:16:40.994 ], 00:16:40.994 "product_name": "Malloc disk", 00:16:40.994 "block_size": 512, 00:16:40.994 "num_blocks": 65536, 00:16:40.994 "uuid": "dc0383b2-65c8-40f2-9fad-95bfd053b39b", 00:16:40.994 "assigned_rate_limits": { 00:16:40.994 "rw_ios_per_sec": 0, 00:16:40.994 "rw_mbytes_per_sec": 0, 00:16:40.994 "r_mbytes_per_sec": 0, 00:16:40.994 "w_mbytes_per_sec": 0 00:16:40.994 }, 00:16:40.994 "claimed": true, 00:16:40.994 "claim_type": "exclusive_write", 00:16:40.994 "zoned": false, 00:16:40.994 "supported_io_types": { 00:16:40.994 "read": true, 00:16:40.994 "write": true, 00:16:40.994 "unmap": true, 00:16:40.994 "flush": true, 00:16:40.994 "reset": true, 00:16:40.994 "nvme_admin": false, 00:16:40.994 "nvme_io": false, 00:16:40.994 "nvme_io_md": false, 00:16:40.994 "write_zeroes": true, 00:16:40.994 "zcopy": true, 00:16:40.994 "get_zone_info": false, 00:16:40.994 "zone_management": false, 00:16:40.994 "zone_append": false, 00:16:40.994 "compare": false, 00:16:40.994 "compare_and_write": false, 00:16:40.994 "abort": true, 00:16:40.994 "seek_hole": false, 00:16:40.994 "seek_data": false, 00:16:40.994 "copy": true, 00:16:40.994 "nvme_iov_md": false 00:16:40.995 }, 00:16:40.995 "memory_domains": [ 00:16:40.995 { 00:16:40.995 "dma_device_id": "system", 00:16:40.995 "dma_device_type": 1 00:16:40.995 }, 00:16:40.995 { 00:16:40.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.995 "dma_device_type": 2 00:16:40.995 } 00:16:40.995 ], 00:16:40.995 "driver_specific": {} 00:16:40.995 } 00:16:40.995 ] 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.995 01:36:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.995 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:41.254 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.254 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.254 "name": "Existed_Raid", 00:16:41.254 "uuid": "e346f931-0000-4c6c-a11f-b3574944235f", 00:16:41.255 "strip_size_kb": 64, 00:16:41.255 "state": "online", 00:16:41.255 "raid_level": "raid5f", 00:16:41.255 "superblock": true, 00:16:41.255 "num_base_bdevs": 4, 00:16:41.255 "num_base_bdevs_discovered": 4, 00:16:41.255 "num_base_bdevs_operational": 4, 00:16:41.255 "base_bdevs_list": [ 00:16:41.255 { 00:16:41.255 "name": "BaseBdev1", 00:16:41.255 "uuid": "e3164761-8beb-4bc6-857a-230e27be36d1", 00:16:41.255 "is_configured": true, 00:16:41.255 "data_offset": 2048, 00:16:41.255 "data_size": 63488 00:16:41.255 }, 00:16:41.255 { 00:16:41.255 "name": "BaseBdev2", 00:16:41.255 "uuid": "9b3e8d6d-b237-47c7-bed8-6811e44b3fd3", 00:16:41.255 "is_configured": true, 00:16:41.255 "data_offset": 2048, 00:16:41.255 "data_size": 63488 00:16:41.255 }, 00:16:41.255 { 00:16:41.255 "name": "BaseBdev3", 00:16:41.255 "uuid": "45b70ccc-abd0-4cba-87d5-138af54da8f4", 00:16:41.255 "is_configured": true, 00:16:41.255 "data_offset": 2048, 00:16:41.255 "data_size": 63488 00:16:41.255 }, 00:16:41.255 { 00:16:41.255 "name": "BaseBdev4", 00:16:41.255 "uuid": "dc0383b2-65c8-40f2-9fad-95bfd053b39b", 00:16:41.255 "is_configured": true, 00:16:41.255 "data_offset": 2048, 00:16:41.255 "data_size": 63488 00:16:41.255 } 00:16:41.255 ] 00:16:41.255 }' 00:16:41.255 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.255 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.514 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:41.514 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:41.514 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:41.514 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:41.514 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:41.514 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:41.514 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:41.515 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.515 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:41.515 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.515 [2024-11-17 01:36:49.943943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.515 01:36:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.774 01:36:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:41.774 "name": "Existed_Raid", 00:16:41.774 "aliases": [ 00:16:41.774 "e346f931-0000-4c6c-a11f-b3574944235f" 00:16:41.774 ], 00:16:41.774 "product_name": "Raid Volume", 00:16:41.774 "block_size": 512, 00:16:41.774 "num_blocks": 190464, 00:16:41.774 "uuid": "e346f931-0000-4c6c-a11f-b3574944235f", 00:16:41.774 "assigned_rate_limits": { 00:16:41.774 "rw_ios_per_sec": 0, 00:16:41.774 "rw_mbytes_per_sec": 0, 00:16:41.774 "r_mbytes_per_sec": 0, 00:16:41.774 "w_mbytes_per_sec": 0 00:16:41.774 }, 00:16:41.774 "claimed": false, 00:16:41.774 "zoned": false, 00:16:41.774 "supported_io_types": { 00:16:41.774 "read": true, 00:16:41.774 "write": true, 00:16:41.774 "unmap": false, 00:16:41.774 "flush": false, 
00:16:41.774 "reset": true, 00:16:41.774 "nvme_admin": false, 00:16:41.774 "nvme_io": false, 00:16:41.774 "nvme_io_md": false, 00:16:41.774 "write_zeroes": true, 00:16:41.774 "zcopy": false, 00:16:41.774 "get_zone_info": false, 00:16:41.774 "zone_management": false, 00:16:41.774 "zone_append": false, 00:16:41.774 "compare": false, 00:16:41.774 "compare_and_write": false, 00:16:41.774 "abort": false, 00:16:41.774 "seek_hole": false, 00:16:41.774 "seek_data": false, 00:16:41.774 "copy": false, 00:16:41.774 "nvme_iov_md": false 00:16:41.774 }, 00:16:41.774 "driver_specific": { 00:16:41.774 "raid": { 00:16:41.774 "uuid": "e346f931-0000-4c6c-a11f-b3574944235f", 00:16:41.774 "strip_size_kb": 64, 00:16:41.774 "state": "online", 00:16:41.774 "raid_level": "raid5f", 00:16:41.774 "superblock": true, 00:16:41.774 "num_base_bdevs": 4, 00:16:41.774 "num_base_bdevs_discovered": 4, 00:16:41.774 "num_base_bdevs_operational": 4, 00:16:41.774 "base_bdevs_list": [ 00:16:41.774 { 00:16:41.774 "name": "BaseBdev1", 00:16:41.774 "uuid": "e3164761-8beb-4bc6-857a-230e27be36d1", 00:16:41.774 "is_configured": true, 00:16:41.774 "data_offset": 2048, 00:16:41.774 "data_size": 63488 00:16:41.774 }, 00:16:41.774 { 00:16:41.774 "name": "BaseBdev2", 00:16:41.774 "uuid": "9b3e8d6d-b237-47c7-bed8-6811e44b3fd3", 00:16:41.774 "is_configured": true, 00:16:41.774 "data_offset": 2048, 00:16:41.774 "data_size": 63488 00:16:41.774 }, 00:16:41.774 { 00:16:41.774 "name": "BaseBdev3", 00:16:41.775 "uuid": "45b70ccc-abd0-4cba-87d5-138af54da8f4", 00:16:41.775 "is_configured": true, 00:16:41.775 "data_offset": 2048, 00:16:41.775 "data_size": 63488 00:16:41.775 }, 00:16:41.775 { 00:16:41.775 "name": "BaseBdev4", 00:16:41.775 "uuid": "dc0383b2-65c8-40f2-9fad-95bfd053b39b", 00:16:41.775 "is_configured": true, 00:16:41.775 "data_offset": 2048, 00:16:41.775 "data_size": 63488 00:16:41.775 } 00:16:41.775 ] 00:16:41.775 } 00:16:41.775 } 00:16:41.775 }' 00:16:41.775 01:36:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:41.775 BaseBdev2 00:16:41.775 BaseBdev3 00:16:41.775 BaseBdev4' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:41.775 01:36:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.775 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.035 [2024-11-17 01:36:50.275240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.035 "name": "Existed_Raid", 00:16:42.035 "uuid": "e346f931-0000-4c6c-a11f-b3574944235f", 00:16:42.035 "strip_size_kb": 64, 00:16:42.035 "state": "online", 00:16:42.035 "raid_level": "raid5f", 00:16:42.035 "superblock": true, 00:16:42.035 "num_base_bdevs": 4, 00:16:42.035 "num_base_bdevs_discovered": 3, 00:16:42.035 "num_base_bdevs_operational": 3, 00:16:42.035 "base_bdevs_list": [ 00:16:42.035 { 00:16:42.035 "name": 
null, 00:16:42.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.035 "is_configured": false, 00:16:42.035 "data_offset": 0, 00:16:42.035 "data_size": 63488 00:16:42.035 }, 00:16:42.035 { 00:16:42.035 "name": "BaseBdev2", 00:16:42.035 "uuid": "9b3e8d6d-b237-47c7-bed8-6811e44b3fd3", 00:16:42.035 "is_configured": true, 00:16:42.035 "data_offset": 2048, 00:16:42.035 "data_size": 63488 00:16:42.035 }, 00:16:42.035 { 00:16:42.035 "name": "BaseBdev3", 00:16:42.035 "uuid": "45b70ccc-abd0-4cba-87d5-138af54da8f4", 00:16:42.035 "is_configured": true, 00:16:42.035 "data_offset": 2048, 00:16:42.035 "data_size": 63488 00:16:42.035 }, 00:16:42.035 { 00:16:42.035 "name": "BaseBdev4", 00:16:42.035 "uuid": "dc0383b2-65c8-40f2-9fad-95bfd053b39b", 00:16:42.035 "is_configured": true, 00:16:42.035 "data_offset": 2048, 00:16:42.035 "data_size": 63488 00:16:42.035 } 00:16:42.035 ] 00:16:42.035 }' 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.035 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.606 [2024-11-17 01:36:50.899969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:42.606 [2024-11-17 01:36:50.900195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.606 [2024-11-17 01:36:50.992525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.606 01:36:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:42.606 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.606 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:42.606 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:42.606 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:42.606 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.606 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.606 [2024-11-17 01:36:51.052432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.866 [2024-11-17 
01:36:51.205255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:42.866 [2024-11-17 01:36:51.205347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.866 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.127 01:36:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.127 BaseBdev2 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.127 [ 00:16:43.127 { 00:16:43.127 "name": "BaseBdev2", 00:16:43.127 "aliases": [ 00:16:43.127 "87266ff8-244f-441a-8f42-09229e9f178d" 00:16:43.127 ], 00:16:43.127 "product_name": "Malloc disk", 00:16:43.127 "block_size": 512, 00:16:43.127 
"num_blocks": 65536, 00:16:43.127 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:43.127 "assigned_rate_limits": { 00:16:43.127 "rw_ios_per_sec": 0, 00:16:43.127 "rw_mbytes_per_sec": 0, 00:16:43.127 "r_mbytes_per_sec": 0, 00:16:43.127 "w_mbytes_per_sec": 0 00:16:43.127 }, 00:16:43.127 "claimed": false, 00:16:43.127 "zoned": false, 00:16:43.127 "supported_io_types": { 00:16:43.127 "read": true, 00:16:43.127 "write": true, 00:16:43.127 "unmap": true, 00:16:43.127 "flush": true, 00:16:43.127 "reset": true, 00:16:43.127 "nvme_admin": false, 00:16:43.127 "nvme_io": false, 00:16:43.127 "nvme_io_md": false, 00:16:43.127 "write_zeroes": true, 00:16:43.127 "zcopy": true, 00:16:43.127 "get_zone_info": false, 00:16:43.127 "zone_management": false, 00:16:43.127 "zone_append": false, 00:16:43.127 "compare": false, 00:16:43.127 "compare_and_write": false, 00:16:43.127 "abort": true, 00:16:43.127 "seek_hole": false, 00:16:43.127 "seek_data": false, 00:16:43.127 "copy": true, 00:16:43.127 "nvme_iov_md": false 00:16:43.127 }, 00:16:43.127 "memory_domains": [ 00:16:43.127 { 00:16:43.127 "dma_device_id": "system", 00:16:43.127 "dma_device_type": 1 00:16:43.127 }, 00:16:43.127 { 00:16:43.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.127 "dma_device_type": 2 00:16:43.127 } 00:16:43.127 ], 00:16:43.127 "driver_specific": {} 00:16:43.127 } 00:16:43.127 ] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.127 01:36:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.127 BaseBdev3 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.127 [ 00:16:43.127 { 00:16:43.127 "name": "BaseBdev3", 00:16:43.127 "aliases": [ 00:16:43.127 
"0f233ce7-bb6c-413a-8d18-d1c4883ab477" 00:16:43.127 ], 00:16:43.127 "product_name": "Malloc disk", 00:16:43.127 "block_size": 512, 00:16:43.127 "num_blocks": 65536, 00:16:43.127 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:43.127 "assigned_rate_limits": { 00:16:43.127 "rw_ios_per_sec": 0, 00:16:43.127 "rw_mbytes_per_sec": 0, 00:16:43.127 "r_mbytes_per_sec": 0, 00:16:43.127 "w_mbytes_per_sec": 0 00:16:43.127 }, 00:16:43.127 "claimed": false, 00:16:43.127 "zoned": false, 00:16:43.127 "supported_io_types": { 00:16:43.127 "read": true, 00:16:43.127 "write": true, 00:16:43.127 "unmap": true, 00:16:43.127 "flush": true, 00:16:43.127 "reset": true, 00:16:43.127 "nvme_admin": false, 00:16:43.127 "nvme_io": false, 00:16:43.127 "nvme_io_md": false, 00:16:43.127 "write_zeroes": true, 00:16:43.127 "zcopy": true, 00:16:43.127 "get_zone_info": false, 00:16:43.127 "zone_management": false, 00:16:43.127 "zone_append": false, 00:16:43.127 "compare": false, 00:16:43.127 "compare_and_write": false, 00:16:43.127 "abort": true, 00:16:43.127 "seek_hole": false, 00:16:43.127 "seek_data": false, 00:16:43.127 "copy": true, 00:16:43.127 "nvme_iov_md": false 00:16:43.127 }, 00:16:43.127 "memory_domains": [ 00:16:43.127 { 00:16:43.127 "dma_device_id": "system", 00:16:43.127 "dma_device_type": 1 00:16:43.127 }, 00:16:43.127 { 00:16:43.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.127 "dma_device_type": 2 00:16:43.127 } 00:16:43.127 ], 00:16:43.127 "driver_specific": {} 00:16:43.127 } 00:16:43.127 ] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:43.127 01:36:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.127 BaseBdev4 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.127 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.128 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:43.128 [ 00:16:43.128 { 00:16:43.128 "name": "BaseBdev4", 00:16:43.128 "aliases": [ 00:16:43.128 "f3bb548c-7aa9-4fcc-88c1-a816c879d84f" 00:16:43.128 ], 00:16:43.128 "product_name": "Malloc disk", 00:16:43.128 "block_size": 512, 00:16:43.128 "num_blocks": 65536, 00:16:43.128 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:43.128 "assigned_rate_limits": { 00:16:43.128 "rw_ios_per_sec": 0, 00:16:43.128 "rw_mbytes_per_sec": 0, 00:16:43.128 "r_mbytes_per_sec": 0, 00:16:43.128 "w_mbytes_per_sec": 0 00:16:43.128 }, 00:16:43.128 "claimed": false, 00:16:43.128 "zoned": false, 00:16:43.128 "supported_io_types": { 00:16:43.128 "read": true, 00:16:43.128 "write": true, 00:16:43.128 "unmap": true, 00:16:43.387 "flush": true, 00:16:43.387 "reset": true, 00:16:43.388 "nvme_admin": false, 00:16:43.388 "nvme_io": false, 00:16:43.388 "nvme_io_md": false, 00:16:43.388 "write_zeroes": true, 00:16:43.388 "zcopy": true, 00:16:43.388 "get_zone_info": false, 00:16:43.388 "zone_management": false, 00:16:43.388 "zone_append": false, 00:16:43.388 "compare": false, 00:16:43.388 "compare_and_write": false, 00:16:43.388 "abort": true, 00:16:43.388 "seek_hole": false, 00:16:43.388 "seek_data": false, 00:16:43.388 "copy": true, 00:16:43.388 "nvme_iov_md": false 00:16:43.388 }, 00:16:43.388 "memory_domains": [ 00:16:43.388 { 00:16:43.388 "dma_device_id": "system", 00:16:43.388 "dma_device_type": 1 00:16:43.388 }, 00:16:43.388 { 00:16:43.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.388 "dma_device_type": 2 00:16:43.388 } 00:16:43.388 ], 00:16:43.388 "driver_specific": {} 00:16:43.388 } 00:16:43.388 ] 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:43.388 01:36:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.388 [2024-11-17 01:36:51.598420] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.388 [2024-11-17 01:36:51.598521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.388 [2024-11-17 01:36:51.598565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.388 [2024-11-17 01:36:51.600380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.388 [2024-11-17 01:36:51.600488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.388 "name": "Existed_Raid", 00:16:43.388 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:43.388 "strip_size_kb": 64, 00:16:43.388 "state": "configuring", 00:16:43.388 "raid_level": "raid5f", 00:16:43.388 "superblock": true, 00:16:43.388 "num_base_bdevs": 4, 00:16:43.388 "num_base_bdevs_discovered": 3, 00:16:43.388 "num_base_bdevs_operational": 4, 00:16:43.388 "base_bdevs_list": [ 00:16:43.388 { 00:16:43.388 "name": "BaseBdev1", 00:16:43.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.388 "is_configured": false, 00:16:43.388 "data_offset": 0, 00:16:43.388 "data_size": 0 00:16:43.388 }, 00:16:43.388 { 00:16:43.388 "name": "BaseBdev2", 00:16:43.388 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:43.388 "is_configured": true, 00:16:43.388 "data_offset": 2048, 00:16:43.388 
"data_size": 63488 00:16:43.388 }, 00:16:43.388 { 00:16:43.388 "name": "BaseBdev3", 00:16:43.388 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:43.388 "is_configured": true, 00:16:43.388 "data_offset": 2048, 00:16:43.388 "data_size": 63488 00:16:43.388 }, 00:16:43.388 { 00:16:43.388 "name": "BaseBdev4", 00:16:43.388 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:43.388 "is_configured": true, 00:16:43.388 "data_offset": 2048, 00:16:43.388 "data_size": 63488 00:16:43.388 } 00:16:43.388 ] 00:16:43.388 }' 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.388 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.648 01:36:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:43.648 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.648 01:36:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.648 [2024-11-17 01:36:52.005774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.648 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.648 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.648 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.648 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.648 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.648 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.648 01:36:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.648 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.648 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.649 "name": "Existed_Raid", 00:16:43.649 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:43.649 "strip_size_kb": 64, 00:16:43.649 "state": "configuring", 00:16:43.649 "raid_level": "raid5f", 00:16:43.649 "superblock": true, 00:16:43.649 "num_base_bdevs": 4, 00:16:43.649 "num_base_bdevs_discovered": 2, 00:16:43.649 "num_base_bdevs_operational": 4, 00:16:43.649 "base_bdevs_list": [ 00:16:43.649 { 00:16:43.649 "name": "BaseBdev1", 00:16:43.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.649 "is_configured": false, 00:16:43.649 "data_offset": 0, 00:16:43.649 "data_size": 0 00:16:43.649 }, 00:16:43.649 { 00:16:43.649 "name": null, 00:16:43.649 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:43.649 
"is_configured": false, 00:16:43.649 "data_offset": 0, 00:16:43.649 "data_size": 63488 00:16:43.649 }, 00:16:43.649 { 00:16:43.649 "name": "BaseBdev3", 00:16:43.649 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:43.649 "is_configured": true, 00:16:43.649 "data_offset": 2048, 00:16:43.649 "data_size": 63488 00:16:43.649 }, 00:16:43.649 { 00:16:43.649 "name": "BaseBdev4", 00:16:43.649 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:43.649 "is_configured": true, 00:16:43.649 "data_offset": 2048, 00:16:43.649 "data_size": 63488 00:16:43.649 } 00:16:43.649 ] 00:16:43.649 }' 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.649 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.219 [2024-11-17 01:36:52.558265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
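The trace above shows `waitforbdev BaseBdev1` repeatedly issuing `rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000` until the freshly created malloc bdev is visible. The polling pattern can be sketched as below; this is a minimal illustration of the wait-with-timeout idea, not SPDK's actual `waitforbdev` implementation, and `wait_for_bdev`/`get_bdevs` are hypothetical names:

```python
import time

def wait_for_bdev(get_bdevs, name, timeout_s=2.0, interval_s=0.1):
    """Poll get_bdevs() until a bdev with the given name (or alias) appears.

    get_bdevs is any callable returning a list of bdev info dicts shaped
    like bdev_get_bdevs output. Returns the matching dict, or raises
    TimeoutError if the bdev never shows up within timeout_s seconds.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        for bdev in get_bdevs():
            if bdev.get("name") == name or name in bdev.get("aliases", []):
                return bdev
        time.sleep(interval_s)
    raise TimeoutError(f"bdev {name!r} did not appear within {timeout_s}s")
```

The 2000 in `-t 2000` corresponds to the 2000 ms `bdev_timeout=2000` default set in the trace when no explicit timeout is passed.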
00:16:44.219 BaseBdev1 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.219 [ 00:16:44.219 { 00:16:44.219 "name": "BaseBdev1", 00:16:44.219 "aliases": [ 00:16:44.219 "142ab3cd-ce20-46ef-aa3e-b0baae74691f" 00:16:44.219 ], 00:16:44.219 "product_name": "Malloc disk", 00:16:44.219 "block_size": 512, 00:16:44.219 "num_blocks": 65536, 00:16:44.219 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 
00:16:44.219 "assigned_rate_limits": { 00:16:44.219 "rw_ios_per_sec": 0, 00:16:44.219 "rw_mbytes_per_sec": 0, 00:16:44.219 "r_mbytes_per_sec": 0, 00:16:44.219 "w_mbytes_per_sec": 0 00:16:44.219 }, 00:16:44.219 "claimed": true, 00:16:44.219 "claim_type": "exclusive_write", 00:16:44.219 "zoned": false, 00:16:44.219 "supported_io_types": { 00:16:44.219 "read": true, 00:16:44.219 "write": true, 00:16:44.219 "unmap": true, 00:16:44.219 "flush": true, 00:16:44.219 "reset": true, 00:16:44.219 "nvme_admin": false, 00:16:44.219 "nvme_io": false, 00:16:44.219 "nvme_io_md": false, 00:16:44.219 "write_zeroes": true, 00:16:44.219 "zcopy": true, 00:16:44.219 "get_zone_info": false, 00:16:44.219 "zone_management": false, 00:16:44.219 "zone_append": false, 00:16:44.219 "compare": false, 00:16:44.219 "compare_and_write": false, 00:16:44.219 "abort": true, 00:16:44.219 "seek_hole": false, 00:16:44.219 "seek_data": false, 00:16:44.219 "copy": true, 00:16:44.219 "nvme_iov_md": false 00:16:44.219 }, 00:16:44.219 "memory_domains": [ 00:16:44.219 { 00:16:44.219 "dma_device_id": "system", 00:16:44.219 "dma_device_type": 1 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.219 "dma_device_type": 2 00:16:44.219 } 00:16:44.219 ], 00:16:44.219 "driver_specific": {} 00:16:44.219 } 00:16:44.219 ] 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.219 01:36:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.219 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.220 "name": "Existed_Raid", 00:16:44.220 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:44.220 "strip_size_kb": 64, 00:16:44.220 "state": "configuring", 00:16:44.220 "raid_level": "raid5f", 00:16:44.220 "superblock": true, 00:16:44.220 "num_base_bdevs": 4, 00:16:44.220 "num_base_bdevs_discovered": 3, 00:16:44.220 "num_base_bdevs_operational": 4, 00:16:44.220 "base_bdevs_list": [ 00:16:44.220 { 00:16:44.220 "name": "BaseBdev1", 00:16:44.220 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 
00:16:44.220 "is_configured": true, 00:16:44.220 "data_offset": 2048, 00:16:44.220 "data_size": 63488 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "name": null, 00:16:44.220 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:44.220 "is_configured": false, 00:16:44.220 "data_offset": 0, 00:16:44.220 "data_size": 63488 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "name": "BaseBdev3", 00:16:44.220 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:44.220 "is_configured": true, 00:16:44.220 "data_offset": 2048, 00:16:44.220 "data_size": 63488 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "name": "BaseBdev4", 00:16:44.220 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:44.220 "is_configured": true, 00:16:44.220 "data_offset": 2048, 00:16:44.220 "data_size": 63488 00:16:44.220 } 00:16:44.220 ] 00:16:44.220 }' 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.220 01:36:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
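Each `verify_raid_bdev_state` call in the trace filters `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and then checks the state, raid level, strip size, and base-bdev counts. A minimal sketch of those checks, using field values copied from the `Existed_Raid` info captured above (after BaseBdev1 was recreated, 3 of 4 base bdevs configured); `verify_raid_state` is a hypothetical name, not the test suite's function:

```python
import json

def verify_raid_state(raid_info, expected_state, expected_level,
                      strip_size_kb, num_operational):
    """Check a raid bdev info dict the way the trace's verify step does:
    state, raid level, strip size, operational count, and that the
    discovered count matches the configured base bdevs."""
    info = json.loads(raid_info) if isinstance(raid_info, str) else raid_info
    assert info["state"] == expected_state
    assert info["raid_level"] == expected_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # "discovered" counts the base bdevs currently configured into the array
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

# Values taken from the Existed_Raid JSON in the log above.
existed_raid = {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": None, "is_configured": False},
        {"name": "BaseBdev3", "is_configured": True},
        {"name": "BaseBdev4", "is_configured": True},
    ],
}
```

The array stays in the `configuring` state throughout because `num_base_bdevs_discovered` never reaches `num_base_bdevs_operational` while the test keeps removing and re-adding base bdevs.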
00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.790 [2024-11-17 01:36:53.093416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.790 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.791 "name": "Existed_Raid", 00:16:44.791 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:44.791 "strip_size_kb": 64, 00:16:44.791 "state": "configuring", 00:16:44.791 "raid_level": "raid5f", 00:16:44.791 "superblock": true, 00:16:44.791 "num_base_bdevs": 4, 00:16:44.791 "num_base_bdevs_discovered": 2, 00:16:44.791 "num_base_bdevs_operational": 4, 00:16:44.791 "base_bdevs_list": [ 00:16:44.791 { 00:16:44.791 "name": "BaseBdev1", 00:16:44.791 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 00:16:44.791 "is_configured": true, 00:16:44.791 "data_offset": 2048, 00:16:44.791 "data_size": 63488 00:16:44.791 }, 00:16:44.791 { 00:16:44.791 "name": null, 00:16:44.791 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:44.791 "is_configured": false, 00:16:44.791 "data_offset": 0, 00:16:44.791 "data_size": 63488 00:16:44.791 }, 00:16:44.791 { 00:16:44.791 "name": null, 00:16:44.791 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:44.791 "is_configured": false, 00:16:44.791 "data_offset": 0, 00:16:44.791 "data_size": 63488 00:16:44.791 }, 00:16:44.791 { 00:16:44.791 "name": "BaseBdev4", 00:16:44.791 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:44.791 "is_configured": true, 00:16:44.791 "data_offset": 2048, 00:16:44.791 "data_size": 63488 00:16:44.791 } 00:16:44.791 ] 00:16:44.791 }' 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.791 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.361 [2024-11-17 01:36:53.636471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.361 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
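The numbers in the trace are self-consistent: each malloc base bdev reports `num_blocks: 65536` at `block_size: 512`, and once claimed into the superblock-enabled array (`-s`), the info shows `data_offset: 2048` and `data_size: 63488`, i.e. 2048 blocks reserved for the superblock. The arithmetic can be sketched as below; the usable-capacity formula (one parity strip per stripe, so n-1 of n base bdevs hold data in raid5f) is an assumption for illustration, not taken from the SPDK sources:

```python
def raid5f_geometry(num_blocks, sb_offset_blocks, num_base_bdevs,
                    block_size=512):
    """Per-base-bdev data size after reserving a superblock region, plus an
    approximate usable raid5f capacity in bytes (ASSUMPTION: capacity is
    data_size * (n - 1) blocks, one parity strip per stripe)."""
    data_size = num_blocks - sb_offset_blocks      # blocks left for data
    usable_blocks = data_size * (num_base_bdevs - 1)
    return data_size, usable_blocks * block_size

# Geometry from the log: 65536-block malloc bdevs, data_offset 2048, 4 bdevs.
data_size, usable_bytes = raid5f_geometry(65536, 2048, 4)
```

With the log's values this reproduces `data_size: 63488` for every configured base bdev.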
00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.362 "name": "Existed_Raid", 00:16:45.362 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:45.362 "strip_size_kb": 64, 00:16:45.362 "state": "configuring", 00:16:45.362 "raid_level": "raid5f", 00:16:45.362 "superblock": true, 00:16:45.362 "num_base_bdevs": 4, 00:16:45.362 "num_base_bdevs_discovered": 3, 00:16:45.362 "num_base_bdevs_operational": 4, 00:16:45.362 "base_bdevs_list": [ 00:16:45.362 { 00:16:45.362 "name": "BaseBdev1", 00:16:45.362 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 00:16:45.362 "is_configured": true, 00:16:45.362 "data_offset": 2048, 00:16:45.362 "data_size": 63488 00:16:45.362 }, 00:16:45.362 { 00:16:45.362 "name": null, 00:16:45.362 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:45.362 "is_configured": false, 00:16:45.362 "data_offset": 0, 00:16:45.362 "data_size": 63488 00:16:45.362 }, 00:16:45.362 { 00:16:45.362 "name": "BaseBdev3", 00:16:45.362 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 
00:16:45.362 "is_configured": true, 00:16:45.362 "data_offset": 2048, 00:16:45.362 "data_size": 63488 00:16:45.362 }, 00:16:45.362 { 00:16:45.362 "name": "BaseBdev4", 00:16:45.362 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:45.362 "is_configured": true, 00:16:45.362 "data_offset": 2048, 00:16:45.362 "data_size": 63488 00:16:45.362 } 00:16:45.362 ] 00:16:45.362 }' 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.362 01:36:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.930 [2024-11-17 01:36:54.139617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.930 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.930 "name": "Existed_Raid", 00:16:45.930 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:45.930 "strip_size_kb": 64, 00:16:45.930 "state": "configuring", 00:16:45.930 "raid_level": "raid5f", 
00:16:45.930 "superblock": true, 00:16:45.930 "num_base_bdevs": 4, 00:16:45.930 "num_base_bdevs_discovered": 2, 00:16:45.930 "num_base_bdevs_operational": 4, 00:16:45.930 "base_bdevs_list": [ 00:16:45.930 { 00:16:45.930 "name": null, 00:16:45.930 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 00:16:45.930 "is_configured": false, 00:16:45.930 "data_offset": 0, 00:16:45.930 "data_size": 63488 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "name": null, 00:16:45.930 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:45.930 "is_configured": false, 00:16:45.930 "data_offset": 0, 00:16:45.930 "data_size": 63488 00:16:45.930 }, 00:16:45.930 { 00:16:45.930 "name": "BaseBdev3", 00:16:45.930 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:45.930 "is_configured": true, 00:16:45.930 "data_offset": 2048, 00:16:45.930 "data_size": 63488 00:16:45.930 }, 00:16:45.930 { 00:16:45.931 "name": "BaseBdev4", 00:16:45.931 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:45.931 "is_configured": true, 00:16:45.931 "data_offset": 2048, 00:16:45.931 "data_size": 63488 00:16:45.931 } 00:16:45.931 ] 00:16:45.931 }' 00:16:45.931 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.931 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 [2024-11-17 01:36:54.746750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.511 "name": "Existed_Raid", 00:16:46.511 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:46.511 "strip_size_kb": 64, 00:16:46.511 "state": "configuring", 00:16:46.511 "raid_level": "raid5f", 00:16:46.511 "superblock": true, 00:16:46.511 "num_base_bdevs": 4, 00:16:46.511 "num_base_bdevs_discovered": 3, 00:16:46.511 "num_base_bdevs_operational": 4, 00:16:46.511 "base_bdevs_list": [ 00:16:46.511 { 00:16:46.511 "name": null, 00:16:46.511 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 00:16:46.511 "is_configured": false, 00:16:46.511 "data_offset": 0, 00:16:46.511 "data_size": 63488 00:16:46.511 }, 00:16:46.511 { 00:16:46.511 "name": "BaseBdev2", 00:16:46.511 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 2048, 00:16:46.511 "data_size": 63488 00:16:46.511 }, 00:16:46.511 { 00:16:46.511 "name": "BaseBdev3", 00:16:46.511 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 2048, 00:16:46.511 "data_size": 63488 00:16:46.511 }, 00:16:46.511 { 00:16:46.511 "name": "BaseBdev4", 00:16:46.511 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:46.511 "is_configured": true, 00:16:46.511 "data_offset": 2048, 00:16:46.511 "data_size": 63488 00:16:46.511 } 00:16:46.511 ] 00:16:46.511 }' 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:46.511 01:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.771 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 142ab3cd-ce20-46ef-aa3e-b0baae74691f 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.032 [2024-11-17 01:36:55.272712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:47.032 [2024-11-17 01:36:55.273054] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:47.032 [2024-11-17 01:36:55.273111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:47.032 [2024-11-17 01:36:55.273387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:47.032 NewBaseBdev 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.032 [2024-11-17 01:36:55.280145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:47.032 [2024-11-17 01:36:55.280207] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:47.032 [2024-11-17 01:36:55.280481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.032 [ 00:16:47.032 { 00:16:47.032 "name": "NewBaseBdev", 00:16:47.032 "aliases": [ 00:16:47.032 "142ab3cd-ce20-46ef-aa3e-b0baae74691f" 00:16:47.032 ], 00:16:47.032 "product_name": "Malloc disk", 00:16:47.032 "block_size": 512, 00:16:47.032 "num_blocks": 65536, 00:16:47.032 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 00:16:47.032 "assigned_rate_limits": { 00:16:47.032 "rw_ios_per_sec": 0, 00:16:47.032 "rw_mbytes_per_sec": 0, 00:16:47.032 "r_mbytes_per_sec": 0, 00:16:47.032 "w_mbytes_per_sec": 0 00:16:47.032 }, 00:16:47.032 "claimed": true, 00:16:47.032 "claim_type": "exclusive_write", 00:16:47.032 "zoned": false, 00:16:47.032 "supported_io_types": { 00:16:47.032 "read": true, 00:16:47.032 "write": true, 00:16:47.032 "unmap": true, 00:16:47.032 "flush": true, 00:16:47.032 "reset": true, 00:16:47.032 "nvme_admin": false, 00:16:47.032 "nvme_io": false, 00:16:47.032 "nvme_io_md": false, 00:16:47.032 "write_zeroes": true, 00:16:47.032 "zcopy": true, 00:16:47.032 "get_zone_info": false, 00:16:47.032 "zone_management": false, 00:16:47.032 "zone_append": false, 00:16:47.032 "compare": false, 00:16:47.032 "compare_and_write": false, 00:16:47.032 "abort": true, 00:16:47.032 "seek_hole": false, 00:16:47.032 "seek_data": false, 00:16:47.032 "copy": true, 00:16:47.032 "nvme_iov_md": false 00:16:47.032 }, 00:16:47.032 "memory_domains": [ 00:16:47.032 { 00:16:47.032 "dma_device_id": "system", 00:16:47.032 "dma_device_type": 1 00:16:47.032 }, 00:16:47.032 { 00:16:47.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.032 "dma_device_type": 2 00:16:47.032 } 
00:16:47.032 ], 00:16:47.032 "driver_specific": {} 00:16:47.032 } 00:16:47.032 ] 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.032 
01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.032 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.033 "name": "Existed_Raid", 00:16:47.033 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:47.033 "strip_size_kb": 64, 00:16:47.033 "state": "online", 00:16:47.033 "raid_level": "raid5f", 00:16:47.033 "superblock": true, 00:16:47.033 "num_base_bdevs": 4, 00:16:47.033 "num_base_bdevs_discovered": 4, 00:16:47.033 "num_base_bdevs_operational": 4, 00:16:47.033 "base_bdevs_list": [ 00:16:47.033 { 00:16:47.033 "name": "NewBaseBdev", 00:16:47.033 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 00:16:47.033 "is_configured": true, 00:16:47.033 "data_offset": 2048, 00:16:47.033 "data_size": 63488 00:16:47.033 }, 00:16:47.033 { 00:16:47.033 "name": "BaseBdev2", 00:16:47.033 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:47.033 "is_configured": true, 00:16:47.033 "data_offset": 2048, 00:16:47.033 "data_size": 63488 00:16:47.033 }, 00:16:47.033 { 00:16:47.033 "name": "BaseBdev3", 00:16:47.033 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:47.033 "is_configured": true, 00:16:47.033 "data_offset": 2048, 00:16:47.033 "data_size": 63488 00:16:47.033 }, 00:16:47.033 { 00:16:47.033 "name": "BaseBdev4", 00:16:47.033 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:47.033 "is_configured": true, 00:16:47.033 "data_offset": 2048, 00:16:47.033 "data_size": 63488 00:16:47.033 } 00:16:47.033 ] 00:16:47.033 }' 00:16:47.033 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.033 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.293 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.293 [2024-11-17 01:36:55.739810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.554 "name": "Existed_Raid", 00:16:47.554 "aliases": [ 00:16:47.554 "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459" 00:16:47.554 ], 00:16:47.554 "product_name": "Raid Volume", 00:16:47.554 "block_size": 512, 00:16:47.554 "num_blocks": 190464, 00:16:47.554 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:47.554 "assigned_rate_limits": { 00:16:47.554 "rw_ios_per_sec": 0, 00:16:47.554 "rw_mbytes_per_sec": 0, 00:16:47.554 "r_mbytes_per_sec": 0, 00:16:47.554 "w_mbytes_per_sec": 0 00:16:47.554 }, 00:16:47.554 "claimed": false, 00:16:47.554 "zoned": false, 00:16:47.554 "supported_io_types": { 00:16:47.554 "read": true, 00:16:47.554 "write": true, 00:16:47.554 "unmap": false, 00:16:47.554 "flush": false, 
00:16:47.554 "reset": true, 00:16:47.554 "nvme_admin": false, 00:16:47.554 "nvme_io": false, 00:16:47.554 "nvme_io_md": false, 00:16:47.554 "write_zeroes": true, 00:16:47.554 "zcopy": false, 00:16:47.554 "get_zone_info": false, 00:16:47.554 "zone_management": false, 00:16:47.554 "zone_append": false, 00:16:47.554 "compare": false, 00:16:47.554 "compare_and_write": false, 00:16:47.554 "abort": false, 00:16:47.554 "seek_hole": false, 00:16:47.554 "seek_data": false, 00:16:47.554 "copy": false, 00:16:47.554 "nvme_iov_md": false 00:16:47.554 }, 00:16:47.554 "driver_specific": { 00:16:47.554 "raid": { 00:16:47.554 "uuid": "ae9cfebc-6c8a-42b7-9ccc-6b2065f38459", 00:16:47.554 "strip_size_kb": 64, 00:16:47.554 "state": "online", 00:16:47.554 "raid_level": "raid5f", 00:16:47.554 "superblock": true, 00:16:47.554 "num_base_bdevs": 4, 00:16:47.554 "num_base_bdevs_discovered": 4, 00:16:47.554 "num_base_bdevs_operational": 4, 00:16:47.554 "base_bdevs_list": [ 00:16:47.554 { 00:16:47.554 "name": "NewBaseBdev", 00:16:47.554 "uuid": "142ab3cd-ce20-46ef-aa3e-b0baae74691f", 00:16:47.554 "is_configured": true, 00:16:47.554 "data_offset": 2048, 00:16:47.554 "data_size": 63488 00:16:47.554 }, 00:16:47.554 { 00:16:47.554 "name": "BaseBdev2", 00:16:47.554 "uuid": "87266ff8-244f-441a-8f42-09229e9f178d", 00:16:47.554 "is_configured": true, 00:16:47.554 "data_offset": 2048, 00:16:47.554 "data_size": 63488 00:16:47.554 }, 00:16:47.554 { 00:16:47.554 "name": "BaseBdev3", 00:16:47.554 "uuid": "0f233ce7-bb6c-413a-8d18-d1c4883ab477", 00:16:47.554 "is_configured": true, 00:16:47.554 "data_offset": 2048, 00:16:47.554 "data_size": 63488 00:16:47.554 }, 00:16:47.554 { 00:16:47.554 "name": "BaseBdev4", 00:16:47.554 "uuid": "f3bb548c-7aa9-4fcc-88c1-a816c879d84f", 00:16:47.554 "is_configured": true, 00:16:47.554 "data_offset": 2048, 00:16:47.554 "data_size": 63488 00:16:47.554 } 00:16:47.554 ] 00:16:47.554 } 00:16:47.554 } 00:16:47.554 }' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:47.554 BaseBdev2 00:16:47.554 BaseBdev3 00:16:47.554 BaseBdev4' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:47.554 
01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.554 01:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.814 01:36:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.814 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.815 [2024-11-17 01:36:56.091030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.815 [2024-11-17 01:36:56.091098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.815 [2024-11-17 01:36:56.091217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.815 [2024-11-17 01:36:56.091526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.815 [2024-11-17 01:36:56.091589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83169 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83169 ']' 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83169 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83169 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83169' 00:16:47.815 killing process with pid 83169 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83169 00:16:47.815 [2024-11-17 01:36:56.138939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.815 01:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83169 00:16:48.384 [2024-11-17 01:36:56.545618] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.396 01:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:49.396 00:16:49.396 real 0m11.845s 00:16:49.396 user 0m18.763s 00:16:49.396 sys 0m2.224s 00:16:49.396 01:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.396 ************************************ 00:16:49.396 END TEST raid5f_state_function_test_sb 00:16:49.396 ************************************ 00:16:49.396 01:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.396 01:36:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:49.396 01:36:57 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:49.396 01:36:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.396 01:36:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.396 ************************************ 00:16:49.396 START TEST raid5f_superblock_test 00:16:49.396 ************************************ 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83850 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83850 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83850 ']' 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.396 01:36:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.656 [2024-11-17 01:36:57.913403] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:49.656 [2024-11-17 01:36:57.913611] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83850 ] 00:16:49.656 [2024-11-17 01:36:58.103522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.916 [2024-11-17 01:36:58.213517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.176 [2024-11-17 01:36:58.408425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.176 [2024-11-17 01:36:58.408476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.436 malloc1 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.436 [2024-11-17 01:36:58.786148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:50.436 [2024-11-17 01:36:58.786277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.436 [2024-11-17 01:36:58.786316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.436 [2024-11-17 01:36:58.786344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.436 [2024-11-17 01:36:58.788429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.436 [2024-11-17 01:36:58.788510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:50.436 pt1 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:50.436 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 malloc2 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 [2024-11-17 01:36:58.842919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.437 [2024-11-17 01:36:58.843020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.437 [2024-11-17 01:36:58.843056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:50.437 [2024-11-17 01:36:58.843083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.437 [2024-11-17 01:36:58.845117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.437 [2024-11-17 01:36:58.845179] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.437 pt2 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.698 malloc3 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.698 [2024-11-17 01:36:58.922019] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:50.698 [2024-11-17 01:36:58.922113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.698 [2024-11-17 01:36:58.922149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:50.698 [2024-11-17 01:36:58.922175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.698 [2024-11-17 01:36:58.924172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.698 [2024-11-17 01:36:58.924241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:50.698 pt3 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.698 01:36:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.698 malloc4 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.698 [2024-11-17 01:36:58.979215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:50.698 [2024-11-17 01:36:58.979302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.698 [2024-11-17 01:36:58.979336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:50.698 [2024-11-17 01:36:58.979363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.698 [2024-11-17 01:36:58.981344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.698 [2024-11-17 01:36:58.981417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:50.698 pt4 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.698 01:36:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.698 [2024-11-17 01:36:58.991232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.698 [2024-11-17 01:36:58.992990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.699 [2024-11-17 01:36:58.993099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.699 [2024-11-17 01:36:58.993177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:50.699 [2024-11-17 01:36:58.993422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.699 [2024-11-17 01:36:58.993471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.699 [2024-11-17 01:36:58.993712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:50.699 [2024-11-17 01:36:59.000979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.699 [2024-11-17 01:36:59.001043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.699 [2024-11-17 01:36:59.001284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.699 
01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.699 "name": "raid_bdev1", 00:16:50.699 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:50.699 "strip_size_kb": 64, 00:16:50.699 "state": "online", 00:16:50.699 "raid_level": "raid5f", 00:16:50.699 "superblock": true, 00:16:50.699 "num_base_bdevs": 4, 00:16:50.699 "num_base_bdevs_discovered": 4, 00:16:50.699 "num_base_bdevs_operational": 4, 00:16:50.699 "base_bdevs_list": [ 00:16:50.699 { 00:16:50.699 "name": "pt1", 00:16:50.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.699 "is_configured": true, 00:16:50.699 "data_offset": 2048, 00:16:50.699 "data_size": 63488 00:16:50.699 }, 00:16:50.699 { 00:16:50.699 "name": "pt2", 00:16:50.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.699 "is_configured": true, 00:16:50.699 "data_offset": 2048, 00:16:50.699 
"data_size": 63488 00:16:50.699 }, 00:16:50.699 { 00:16:50.699 "name": "pt3", 00:16:50.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.699 "is_configured": true, 00:16:50.699 "data_offset": 2048, 00:16:50.699 "data_size": 63488 00:16:50.699 }, 00:16:50.699 { 00:16:50.699 "name": "pt4", 00:16:50.699 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.699 "is_configured": true, 00:16:50.699 "data_offset": 2048, 00:16:50.699 "data_size": 63488 00:16:50.699 } 00:16:50.699 ] 00:16:50.699 }' 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.699 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.284 [2024-11-17 01:36:59.456761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.284 "name": "raid_bdev1", 00:16:51.284 "aliases": [ 00:16:51.284 "a87efff2-44fc-41dc-ab67-1ddd82a48f53" 00:16:51.284 ], 00:16:51.284 "product_name": "Raid Volume", 00:16:51.284 "block_size": 512, 00:16:51.284 "num_blocks": 190464, 00:16:51.284 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:51.284 "assigned_rate_limits": { 00:16:51.284 "rw_ios_per_sec": 0, 00:16:51.284 "rw_mbytes_per_sec": 0, 00:16:51.284 "r_mbytes_per_sec": 0, 00:16:51.284 "w_mbytes_per_sec": 0 00:16:51.284 }, 00:16:51.284 "claimed": false, 00:16:51.284 "zoned": false, 00:16:51.284 "supported_io_types": { 00:16:51.284 "read": true, 00:16:51.284 "write": true, 00:16:51.284 "unmap": false, 00:16:51.284 "flush": false, 00:16:51.284 "reset": true, 00:16:51.284 "nvme_admin": false, 00:16:51.284 "nvme_io": false, 00:16:51.284 "nvme_io_md": false, 00:16:51.284 "write_zeroes": true, 00:16:51.284 "zcopy": false, 00:16:51.284 "get_zone_info": false, 00:16:51.284 "zone_management": false, 00:16:51.284 "zone_append": false, 00:16:51.284 "compare": false, 00:16:51.284 "compare_and_write": false, 00:16:51.284 "abort": false, 00:16:51.284 "seek_hole": false, 00:16:51.284 "seek_data": false, 00:16:51.284 "copy": false, 00:16:51.284 "nvme_iov_md": false 00:16:51.284 }, 00:16:51.284 "driver_specific": { 00:16:51.284 "raid": { 00:16:51.284 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:51.284 "strip_size_kb": 64, 00:16:51.284 "state": "online", 00:16:51.284 "raid_level": "raid5f", 00:16:51.284 "superblock": true, 00:16:51.284 "num_base_bdevs": 4, 00:16:51.284 "num_base_bdevs_discovered": 4, 00:16:51.284 "num_base_bdevs_operational": 4, 00:16:51.284 "base_bdevs_list": [ 00:16:51.284 { 00:16:51.284 "name": "pt1", 00:16:51.284 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.284 "is_configured": true, 00:16:51.284 "data_offset": 2048, 
00:16:51.284 "data_size": 63488 00:16:51.284 }, 00:16:51.284 { 00:16:51.284 "name": "pt2", 00:16:51.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.284 "is_configured": true, 00:16:51.284 "data_offset": 2048, 00:16:51.284 "data_size": 63488 00:16:51.284 }, 00:16:51.284 { 00:16:51.284 "name": "pt3", 00:16:51.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.284 "is_configured": true, 00:16:51.284 "data_offset": 2048, 00:16:51.284 "data_size": 63488 00:16:51.284 }, 00:16:51.284 { 00:16:51.284 "name": "pt4", 00:16:51.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:51.284 "is_configured": true, 00:16:51.284 "data_offset": 2048, 00:16:51.284 "data_size": 63488 00:16:51.284 } 00:16:51.284 ] 00:16:51.284 } 00:16:51.284 } 00:16:51.284 }' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:51.284 pt2 00:16:51.284 pt3 00:16:51.284 pt4' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.284 01:36:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.284 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.544 [2024-11-17 01:36:59.788222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a87efff2-44fc-41dc-ab67-1ddd82a48f53 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
a87efff2-44fc-41dc-ab67-1ddd82a48f53 ']' 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.544 [2024-11-17 01:36:59.827983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.544 [2024-11-17 01:36:59.828043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.544 [2024-11-17 01:36:59.828114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.544 [2024-11-17 01:36:59.828189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.544 [2024-11-17 01:36:59.828202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.544 
01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.544 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.545 01:36:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:51.545 01:36:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.545 [2024-11-17 01:36:59.995711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:51.545 [2024-11-17 01:36:59.997485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:51.545 [2024-11-17 01:36:59.997584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:51.545 [2024-11-17 01:36:59.997634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:51.545 [2024-11-17 01:36:59.997708] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:51.545 [2024-11-17 01:36:59.997786] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:51.545 [2024-11-17 01:36:59.997842] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:51.545 [2024-11-17 01:36:59.997903] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:51.545 [2024-11-17 01:36:59.997952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.545 [2024-11-17 01:36:59.997983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:51.545 request: 00:16:51.805 { 00:16:51.805 "name": "raid_bdev1", 00:16:51.805 "raid_level": "raid5f", 00:16:51.805 "base_bdevs": [ 00:16:51.805 "malloc1", 00:16:51.805 "malloc2", 00:16:51.805 "malloc3", 00:16:51.805 "malloc4" 00:16:51.805 ], 00:16:51.805 "strip_size_kb": 64, 00:16:51.805 "superblock": false, 00:16:51.805 "method": "bdev_raid_create", 00:16:51.805 "req_id": 1 00:16:51.805 } 00:16:51.805 Got JSON-RPC error response 
00:16:51.805 response: 00:16:51.805 { 00:16:51.805 "code": -17, 00:16:51.805 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:51.805 } 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.805 [2024-11-17 01:37:00.063568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.805 [2024-11-17 01:37:00.063649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:51.805 [2024-11-17 01:37:00.063679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:51.805 [2024-11-17 01:37:00.063707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.805 [2024-11-17 01:37:00.065982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.805 [2024-11-17 01:37:00.066060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.805 [2024-11-17 01:37:00.066164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:51.805 [2024-11-17 01:37:00.066240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.805 pt1 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.805 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.805 "name": "raid_bdev1", 00:16:51.805 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:51.805 "strip_size_kb": 64, 00:16:51.805 "state": "configuring", 00:16:51.805 "raid_level": "raid5f", 00:16:51.805 "superblock": true, 00:16:51.805 "num_base_bdevs": 4, 00:16:51.805 "num_base_bdevs_discovered": 1, 00:16:51.805 "num_base_bdevs_operational": 4, 00:16:51.805 "base_bdevs_list": [ 00:16:51.805 { 00:16:51.805 "name": "pt1", 00:16:51.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.805 "is_configured": true, 00:16:51.805 "data_offset": 2048, 00:16:51.806 "data_size": 63488 00:16:51.806 }, 00:16:51.806 { 00:16:51.806 "name": null, 00:16:51.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.806 "is_configured": false, 00:16:51.806 "data_offset": 2048, 00:16:51.806 "data_size": 63488 00:16:51.806 }, 00:16:51.806 { 00:16:51.806 "name": null, 00:16:51.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.806 "is_configured": false, 00:16:51.806 "data_offset": 2048, 00:16:51.806 "data_size": 63488 00:16:51.806 }, 00:16:51.806 { 00:16:51.806 "name": null, 00:16:51.806 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:51.806 "is_configured": false, 00:16:51.806 "data_offset": 2048, 00:16:51.806 "data_size": 63488 00:16:51.806 } 00:16:51.806 ] 00:16:51.806 }' 
00:16:51.806 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.806 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.066 [2024-11-17 01:37:00.478879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:52.066 [2024-11-17 01:37:00.478987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.066 [2024-11-17 01:37:00.479019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:52.066 [2024-11-17 01:37:00.479047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.066 [2024-11-17 01:37:00.479418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.066 [2024-11-17 01:37:00.479476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:52.066 [2024-11-17 01:37:00.479559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:52.066 [2024-11-17 01:37:00.479606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.066 pt2 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.066 [2024-11-17 01:37:00.490883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.066 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:52.326 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.326 "name": "raid_bdev1", 00:16:52.326 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:52.326 "strip_size_kb": 64, 00:16:52.326 "state": "configuring", 00:16:52.326 "raid_level": "raid5f", 00:16:52.326 "superblock": true, 00:16:52.326 "num_base_bdevs": 4, 00:16:52.326 "num_base_bdevs_discovered": 1, 00:16:52.326 "num_base_bdevs_operational": 4, 00:16:52.326 "base_bdevs_list": [ 00:16:52.326 { 00:16:52.326 "name": "pt1", 00:16:52.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.326 "is_configured": true, 00:16:52.326 "data_offset": 2048, 00:16:52.326 "data_size": 63488 00:16:52.326 }, 00:16:52.326 { 00:16:52.326 "name": null, 00:16:52.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.326 "is_configured": false, 00:16:52.326 "data_offset": 0, 00:16:52.326 "data_size": 63488 00:16:52.326 }, 00:16:52.326 { 00:16:52.326 "name": null, 00:16:52.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.326 "is_configured": false, 00:16:52.326 "data_offset": 2048, 00:16:52.326 "data_size": 63488 00:16:52.326 }, 00:16:52.326 { 00:16:52.326 "name": null, 00:16:52.326 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.326 "is_configured": false, 00:16:52.326 "data_offset": 2048, 00:16:52.326 "data_size": 63488 00:16:52.326 } 00:16:52.326 ] 00:16:52.326 }' 00:16:52.326 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.326 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.587 [2024-11-17 01:37:00.958083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:52.587 [2024-11-17 01:37:00.958190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.587 [2024-11-17 01:37:00.958223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:52.587 [2024-11-17 01:37:00.958249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.587 [2024-11-17 01:37:00.958690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.587 [2024-11-17 01:37:00.958743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:52.587 [2024-11-17 01:37:00.958837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:52.587 [2024-11-17 01:37:00.958859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.587 pt2 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.587 [2024-11-17 01:37:00.970033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:52.587 [2024-11-17 01:37:00.970137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.587 [2024-11-17 01:37:00.970168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:52.587 [2024-11-17 01:37:00.970193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.587 [2024-11-17 01:37:00.970547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.587 [2024-11-17 01:37:00.970599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:52.587 [2024-11-17 01:37:00.970681] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:52.587 [2024-11-17 01:37:00.970724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:52.587 pt3 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.587 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.587 [2024-11-17 01:37:00.981986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:52.587 [2024-11-17 01:37:00.982079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.587 [2024-11-17 01:37:00.982111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:52.588 [2024-11-17 01:37:00.982136] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.588 [2024-11-17 01:37:00.982486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.588 [2024-11-17 01:37:00.982535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:52.588 [2024-11-17 01:37:00.982598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:52.588 [2024-11-17 01:37:00.982616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:52.588 [2024-11-17 01:37:00.982738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:52.588 [2024-11-17 01:37:00.982746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:52.588 [2024-11-17 01:37:00.982977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:52.588 [2024-11-17 01:37:00.989683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:52.588 [2024-11-17 01:37:00.989736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:52.588 [2024-11-17 01:37:00.989960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.588 pt4 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.588 01:37:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.588 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.588 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.588 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.588 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.588 "name": "raid_bdev1", 00:16:52.588 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:52.588 "strip_size_kb": 64, 00:16:52.588 "state": "online", 00:16:52.588 "raid_level": "raid5f", 00:16:52.588 "superblock": true, 00:16:52.588 "num_base_bdevs": 4, 00:16:52.588 "num_base_bdevs_discovered": 4, 00:16:52.588 "num_base_bdevs_operational": 4, 00:16:52.588 "base_bdevs_list": [ 00:16:52.588 { 00:16:52.588 "name": "pt1", 00:16:52.588 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.588 "is_configured": true, 00:16:52.588 
"data_offset": 2048, 00:16:52.588 "data_size": 63488 00:16:52.588 }, 00:16:52.588 { 00:16:52.588 "name": "pt2", 00:16:52.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.588 "is_configured": true, 00:16:52.588 "data_offset": 2048, 00:16:52.588 "data_size": 63488 00:16:52.588 }, 00:16:52.588 { 00:16:52.588 "name": "pt3", 00:16:52.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.588 "is_configured": true, 00:16:52.588 "data_offset": 2048, 00:16:52.588 "data_size": 63488 00:16:52.588 }, 00:16:52.588 { 00:16:52.588 "name": "pt4", 00:16:52.588 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.588 "is_configured": true, 00:16:52.588 "data_offset": 2048, 00:16:52.588 "data_size": 63488 00:16:52.588 } 00:16:52.588 ] 00:16:52.588 }' 00:16:52.588 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.588 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.158 01:37:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.158 [2024-11-17 01:37:01.433687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.158 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:53.158 "name": "raid_bdev1", 00:16:53.158 "aliases": [ 00:16:53.158 "a87efff2-44fc-41dc-ab67-1ddd82a48f53" 00:16:53.158 ], 00:16:53.158 "product_name": "Raid Volume", 00:16:53.158 "block_size": 512, 00:16:53.158 "num_blocks": 190464, 00:16:53.158 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:53.158 "assigned_rate_limits": { 00:16:53.158 "rw_ios_per_sec": 0, 00:16:53.158 "rw_mbytes_per_sec": 0, 00:16:53.158 "r_mbytes_per_sec": 0, 00:16:53.158 "w_mbytes_per_sec": 0 00:16:53.158 }, 00:16:53.158 "claimed": false, 00:16:53.158 "zoned": false, 00:16:53.158 "supported_io_types": { 00:16:53.158 "read": true, 00:16:53.158 "write": true, 00:16:53.158 "unmap": false, 00:16:53.158 "flush": false, 00:16:53.158 "reset": true, 00:16:53.158 "nvme_admin": false, 00:16:53.158 "nvme_io": false, 00:16:53.158 "nvme_io_md": false, 00:16:53.158 "write_zeroes": true, 00:16:53.158 "zcopy": false, 00:16:53.158 "get_zone_info": false, 00:16:53.158 "zone_management": false, 00:16:53.158 "zone_append": false, 00:16:53.158 "compare": false, 00:16:53.158 "compare_and_write": false, 00:16:53.158 "abort": false, 00:16:53.158 "seek_hole": false, 00:16:53.158 "seek_data": false, 00:16:53.158 "copy": false, 00:16:53.158 "nvme_iov_md": false 00:16:53.158 }, 00:16:53.158 "driver_specific": { 00:16:53.158 "raid": { 00:16:53.158 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:53.158 "strip_size_kb": 64, 00:16:53.158 "state": "online", 00:16:53.158 "raid_level": "raid5f", 00:16:53.158 "superblock": true, 00:16:53.158 "num_base_bdevs": 4, 00:16:53.158 "num_base_bdevs_discovered": 4, 
00:16:53.158 "num_base_bdevs_operational": 4, 00:16:53.158 "base_bdevs_list": [ 00:16:53.158 { 00:16:53.158 "name": "pt1", 00:16:53.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:53.158 "is_configured": true, 00:16:53.158 "data_offset": 2048, 00:16:53.158 "data_size": 63488 00:16:53.158 }, 00:16:53.158 { 00:16:53.158 "name": "pt2", 00:16:53.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.158 "is_configured": true, 00:16:53.158 "data_offset": 2048, 00:16:53.158 "data_size": 63488 00:16:53.158 }, 00:16:53.158 { 00:16:53.158 "name": "pt3", 00:16:53.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:53.158 "is_configured": true, 00:16:53.158 "data_offset": 2048, 00:16:53.158 "data_size": 63488 00:16:53.158 }, 00:16:53.158 { 00:16:53.158 "name": "pt4", 00:16:53.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:53.158 "is_configured": true, 00:16:53.158 "data_offset": 2048, 00:16:53.158 "data_size": 63488 00:16:53.158 } 00:16:53.158 ] 00:16:53.158 } 00:16:53.158 } 00:16:53.158 }' 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:53.159 pt2 00:16:53.159 pt3 00:16:53.159 pt4' 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.159 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.419 [2024-11-17 01:37:01.753093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.419 01:37:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a87efff2-44fc-41dc-ab67-1ddd82a48f53 '!=' a87efff2-44fc-41dc-ab67-1ddd82a48f53 ']' 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.419 [2024-11-17 01:37:01.796904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.419 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.420 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.420 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.420 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.420 "name": "raid_bdev1", 00:16:53.420 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:53.420 "strip_size_kb": 64, 00:16:53.420 "state": "online", 00:16:53.420 "raid_level": "raid5f", 00:16:53.420 "superblock": true, 00:16:53.420 "num_base_bdevs": 4, 00:16:53.420 "num_base_bdevs_discovered": 3, 00:16:53.420 "num_base_bdevs_operational": 3, 00:16:53.420 "base_bdevs_list": [ 00:16:53.420 { 00:16:53.420 "name": null, 00:16:53.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.420 "is_configured": false, 00:16:53.420 "data_offset": 0, 00:16:53.420 "data_size": 63488 00:16:53.420 }, 00:16:53.420 { 00:16:53.420 "name": "pt2", 00:16:53.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.420 "is_configured": true, 00:16:53.420 "data_offset": 2048, 00:16:53.420 "data_size": 63488 00:16:53.420 }, 00:16:53.420 { 00:16:53.420 "name": "pt3", 00:16:53.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:53.420 "is_configured": true, 00:16:53.420 "data_offset": 2048, 00:16:53.420 "data_size": 63488 00:16:53.420 }, 00:16:53.420 { 00:16:53.420 "name": "pt4", 00:16:53.420 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:53.420 "is_configured": true, 00:16:53.420 
"data_offset": 2048, 00:16:53.420 "data_size": 63488 00:16:53.420 } 00:16:53.420 ] 00:16:53.420 }' 00:16:53.420 01:37:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.420 01:37:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.990 [2024-11-17 01:37:02.212178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.990 [2024-11-17 01:37:02.212243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.990 [2024-11-17 01:37:02.212354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.990 [2024-11-17 01:37:02.212452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.990 [2024-11-17 01:37:02.212522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:53.990 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.991 [2024-11-17 01:37:02.300003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:53.991 [2024-11-17 01:37:02.300088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.991 [2024-11-17 01:37:02.300121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:53.991 [2024-11-17 01:37:02.300147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.991 [2024-11-17 01:37:02.302291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.991 [2024-11-17 01:37:02.302365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:53.991 [2024-11-17 01:37:02.302479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:53.991 [2024-11-17 01:37:02.302545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:53.991 pt2 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.991 "name": "raid_bdev1", 00:16:53.991 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:53.991 "strip_size_kb": 64, 00:16:53.991 "state": "configuring", 00:16:53.991 "raid_level": "raid5f", 00:16:53.991 "superblock": true, 00:16:53.991 
"num_base_bdevs": 4, 00:16:53.991 "num_base_bdevs_discovered": 1, 00:16:53.991 "num_base_bdevs_operational": 3, 00:16:53.991 "base_bdevs_list": [ 00:16:53.991 { 00:16:53.991 "name": null, 00:16:53.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.991 "is_configured": false, 00:16:53.991 "data_offset": 2048, 00:16:53.991 "data_size": 63488 00:16:53.991 }, 00:16:53.991 { 00:16:53.991 "name": "pt2", 00:16:53.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.991 "is_configured": true, 00:16:53.991 "data_offset": 2048, 00:16:53.991 "data_size": 63488 00:16:53.991 }, 00:16:53.991 { 00:16:53.991 "name": null, 00:16:53.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:53.991 "is_configured": false, 00:16:53.991 "data_offset": 2048, 00:16:53.991 "data_size": 63488 00:16:53.991 }, 00:16:53.991 { 00:16:53.991 "name": null, 00:16:53.991 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:53.991 "is_configured": false, 00:16:53.991 "data_offset": 2048, 00:16:53.991 "data_size": 63488 00:16:53.991 } 00:16:53.991 ] 00:16:53.991 }' 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.991 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.561 [2024-11-17 01:37:02.719272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:54.561 [2024-11-17 
01:37:02.719356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.561 [2024-11-17 01:37:02.719389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:54.561 [2024-11-17 01:37:02.719416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.561 [2024-11-17 01:37:02.719793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.561 [2024-11-17 01:37:02.719845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:54.561 [2024-11-17 01:37:02.719935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:54.561 [2024-11-17 01:37:02.719988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:54.561 pt3 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.561 "name": "raid_bdev1", 00:16:54.561 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:54.561 "strip_size_kb": 64, 00:16:54.561 "state": "configuring", 00:16:54.561 "raid_level": "raid5f", 00:16:54.561 "superblock": true, 00:16:54.561 "num_base_bdevs": 4, 00:16:54.561 "num_base_bdevs_discovered": 2, 00:16:54.561 "num_base_bdevs_operational": 3, 00:16:54.561 "base_bdevs_list": [ 00:16:54.561 { 00:16:54.561 "name": null, 00:16:54.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.561 "is_configured": false, 00:16:54.561 "data_offset": 2048, 00:16:54.561 "data_size": 63488 00:16:54.561 }, 00:16:54.561 { 00:16:54.561 "name": "pt2", 00:16:54.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.561 "is_configured": true, 00:16:54.561 "data_offset": 2048, 00:16:54.561 "data_size": 63488 00:16:54.561 }, 00:16:54.561 { 00:16:54.561 "name": "pt3", 00:16:54.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.561 "is_configured": true, 00:16:54.561 "data_offset": 2048, 00:16:54.561 "data_size": 63488 00:16:54.561 }, 00:16:54.561 { 00:16:54.561 "name": null, 00:16:54.561 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:54.561 "is_configured": false, 00:16:54.561 "data_offset": 2048, 
00:16:54.561 "data_size": 63488 00:16:54.561 } 00:16:54.561 ] 00:16:54.561 }' 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.561 01:37:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.822 [2024-11-17 01:37:03.134580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:54.822 [2024-11-17 01:37:03.134672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.822 [2024-11-17 01:37:03.134708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:54.822 [2024-11-17 01:37:03.134735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.822 [2024-11-17 01:37:03.135144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.822 [2024-11-17 01:37:03.135206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:54.822 [2024-11-17 01:37:03.135318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:54.822 [2024-11-17 01:37:03.135369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:54.822 [2024-11-17 01:37:03.135532] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:54.822 [2024-11-17 01:37:03.135571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:54.822 [2024-11-17 01:37:03.135839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:54.822 [2024-11-17 01:37:03.142662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:54.822 [2024-11-17 01:37:03.142720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:54.822 [2024-11-17 01:37:03.143093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.822 pt4 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.822 
01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.822 "name": "raid_bdev1", 00:16:54.822 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:54.822 "strip_size_kb": 64, 00:16:54.822 "state": "online", 00:16:54.822 "raid_level": "raid5f", 00:16:54.822 "superblock": true, 00:16:54.822 "num_base_bdevs": 4, 00:16:54.822 "num_base_bdevs_discovered": 3, 00:16:54.822 "num_base_bdevs_operational": 3, 00:16:54.822 "base_bdevs_list": [ 00:16:54.822 { 00:16:54.822 "name": null, 00:16:54.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.822 "is_configured": false, 00:16:54.822 "data_offset": 2048, 00:16:54.822 "data_size": 63488 00:16:54.822 }, 00:16:54.822 { 00:16:54.822 "name": "pt2", 00:16:54.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.822 "is_configured": true, 00:16:54.822 "data_offset": 2048, 00:16:54.822 "data_size": 63488 00:16:54.822 }, 00:16:54.822 { 00:16:54.822 "name": "pt3", 00:16:54.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.822 "is_configured": true, 00:16:54.822 "data_offset": 2048, 00:16:54.822 "data_size": 63488 00:16:54.822 }, 00:16:54.822 { 00:16:54.822 "name": "pt4", 00:16:54.822 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:54.822 "is_configured": true, 00:16:54.822 "data_offset": 2048, 00:16:54.822 "data_size": 63488 00:16:54.822 } 00:16:54.822 ] 00:16:54.822 }' 00:16:54.822 01:37:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.822 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.392 [2024-11-17 01:37:03.591199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.392 [2024-11-17 01:37:03.591261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.392 [2024-11-17 01:37:03.591341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.392 [2024-11-17 01:37:03.591422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.392 [2024-11-17 01:37:03.591467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.392 [2024-11-17 01:37:03.647095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:55.392 [2024-11-17 01:37:03.647191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.392 [2024-11-17 01:37:03.647232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:55.392 [2024-11-17 01:37:03.647290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.392 [2024-11-17 01:37:03.649452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.392 [2024-11-17 01:37:03.649539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:55.392 [2024-11-17 01:37:03.649634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:55.392 [2024-11-17 01:37:03.649703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:55.392 
[2024-11-17 01:37:03.649906] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:55.392 [2024-11-17 01:37:03.649963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:55.392 [2024-11-17 01:37:03.650007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:55.392 [2024-11-17 01:37:03.650094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.392 [2024-11-17 01:37:03.650226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:55.392 pt1 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.392 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.392 "name": "raid_bdev1", 00:16:55.392 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:55.392 "strip_size_kb": 64, 00:16:55.392 "state": "configuring", 00:16:55.392 "raid_level": "raid5f", 00:16:55.392 "superblock": true, 00:16:55.392 "num_base_bdevs": 4, 00:16:55.393 "num_base_bdevs_discovered": 2, 00:16:55.393 "num_base_bdevs_operational": 3, 00:16:55.393 "base_bdevs_list": [ 00:16:55.393 { 00:16:55.393 "name": null, 00:16:55.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.393 "is_configured": false, 00:16:55.393 "data_offset": 2048, 00:16:55.393 "data_size": 63488 00:16:55.393 }, 00:16:55.393 { 00:16:55.393 "name": "pt2", 00:16:55.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.393 "is_configured": true, 00:16:55.393 "data_offset": 2048, 00:16:55.393 "data_size": 63488 00:16:55.393 }, 00:16:55.393 { 00:16:55.393 "name": "pt3", 00:16:55.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.393 "is_configured": true, 00:16:55.393 "data_offset": 2048, 00:16:55.393 "data_size": 63488 00:16:55.393 }, 00:16:55.393 { 00:16:55.393 "name": null, 00:16:55.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:55.393 "is_configured": false, 00:16:55.393 "data_offset": 2048, 00:16:55.393 "data_size": 63488 00:16:55.393 } 00:16:55.393 ] 
00:16:55.393 }' 00:16:55.393 01:37:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.393 01:37:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.653 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:55.653 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.653 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.653 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:55.653 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.913 [2024-11-17 01:37:04.134282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:55.913 [2024-11-17 01:37:04.134384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.913 [2024-11-17 01:37:04.134422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:55.913 [2024-11-17 01:37:04.134450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.913 [2024-11-17 01:37:04.134848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.913 [2024-11-17 01:37:04.134903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:55.913 [2024-11-17 01:37:04.134997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:55.913 [2024-11-17 01:37:04.135053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:55.913 [2024-11-17 01:37:04.135244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:55.913 [2024-11-17 01:37:04.135289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:55.913 [2024-11-17 01:37:04.135564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:55.913 [2024-11-17 01:37:04.142801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:55.913 [2024-11-17 01:37:04.142823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:55.913 [2024-11-17 01:37:04.143079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.913 pt4 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.913 01:37:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.913 "name": "raid_bdev1", 00:16:55.913 "uuid": "a87efff2-44fc-41dc-ab67-1ddd82a48f53", 00:16:55.913 "strip_size_kb": 64, 00:16:55.913 "state": "online", 00:16:55.913 "raid_level": "raid5f", 00:16:55.913 "superblock": true, 00:16:55.913 "num_base_bdevs": 4, 00:16:55.913 "num_base_bdevs_discovered": 3, 00:16:55.913 "num_base_bdevs_operational": 3, 00:16:55.913 "base_bdevs_list": [ 00:16:55.913 { 00:16:55.913 "name": null, 00:16:55.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.913 "is_configured": false, 00:16:55.913 "data_offset": 2048, 00:16:55.913 "data_size": 63488 00:16:55.913 }, 00:16:55.913 { 00:16:55.913 "name": "pt2", 00:16:55.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:55.913 "is_configured": true, 00:16:55.913 "data_offset": 2048, 00:16:55.913 "data_size": 63488 00:16:55.913 }, 00:16:55.913 { 00:16:55.913 "name": "pt3", 00:16:55.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:55.913 "is_configured": true, 00:16:55.913 "data_offset": 2048, 00:16:55.913 "data_size": 63488 
00:16:55.913 }, 00:16:55.913 { 00:16:55.913 "name": "pt4", 00:16:55.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:55.913 "is_configured": true, 00:16:55.913 "data_offset": 2048, 00:16:55.913 "data_size": 63488 00:16:55.913 } 00:16:55.913 ] 00:16:55.913 }' 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.913 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.173 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:56.173 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:56.173 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.174 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.174 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.174 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:56.174 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:56.174 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:56.174 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.174 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.174 [2024-11-17 01:37:04.619191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a87efff2-44fc-41dc-ab67-1ddd82a48f53 '!=' a87efff2-44fc-41dc-ab67-1ddd82a48f53 ']' 00:16:56.433 01:37:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83850 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83850 ']' 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83850 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83850 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83850' 00:16:56.433 killing process with pid 83850 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83850 00:16:56.433 [2024-11-17 01:37:04.691983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.433 [2024-11-17 01:37:04.692066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.433 [2024-11-17 01:37:04.692134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.433 [2024-11-17 01:37:04.692145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:56.433 01:37:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83850 00:16:56.693 [2024-11-17 01:37:05.062590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.075 01:37:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:58.075 
00:16:58.075 real 0m8.269s 00:16:58.075 user 0m12.981s 00:16:58.075 sys 0m1.556s 00:16:58.075 ************************************ 00:16:58.075 END TEST raid5f_superblock_test 00:16:58.075 ************************************ 00:16:58.075 01:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.075 01:37:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.075 01:37:06 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:58.075 01:37:06 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:58.075 01:37:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:58.075 01:37:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.075 01:37:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.075 ************************************ 00:16:58.075 START TEST raid5f_rebuild_test 00:16:58.075 ************************************ 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:58.075 01:37:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84334 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84334 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84334 ']' 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.075 01:37:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.075 [2024-11-17 01:37:06.265605] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:58.075 [2024-11-17 01:37:06.265811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:58.075 Zero copy mechanism will not be used. 
00:16:58.075 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84334 ] 00:16:58.075 [2024-11-17 01:37:06.436420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.335 [2024-11-17 01:37:06.545246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.335 [2024-11-17 01:37:06.733096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.335 [2024-11-17 01:37:06.733226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.905 BaseBdev1_malloc 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.905 [2024-11-17 01:37:07.122260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:58.905 [2024-11-17 01:37:07.122338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:16:58.905 [2024-11-17 01:37:07.122361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:58.905 [2024-11-17 01:37:07.122372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.905 [2024-11-17 01:37:07.124371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.905 [2024-11-17 01:37:07.124410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:58.905 BaseBdev1 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.905 BaseBdev2_malloc 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.905 [2024-11-17 01:37:07.176595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:58.905 [2024-11-17 01:37:07.176706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.905 [2024-11-17 01:37:07.176740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:58.905 [2024-11-17 01:37:07.176797] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.905 [2024-11-17 01:37:07.178743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.905 [2024-11-17 01:37:07.178819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:58.905 BaseBdev2 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.905 BaseBdev3_malloc 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.905 [2024-11-17 01:37:07.242170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:58.905 [2024-11-17 01:37:07.242276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.905 [2024-11-17 01:37:07.242312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:58.905 [2024-11-17 01:37:07.242341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.905 [2024-11-17 01:37:07.244312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.905 [2024-11-17 
01:37:07.244398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:58.905 BaseBdev3 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.905 BaseBdev4_malloc 00:16:58.905 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.906 [2024-11-17 01:37:07.292267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:58.906 [2024-11-17 01:37:07.292384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.906 [2024-11-17 01:37:07.292416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:58.906 [2024-11-17 01:37:07.292444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.906 [2024-11-17 01:37:07.294418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.906 [2024-11-17 01:37:07.294489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:58.906 BaseBdev4 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.906 spare_malloc 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.906 spare_delay 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.906 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.906 [2024-11-17 01:37:07.358986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:58.906 [2024-11-17 01:37:07.359078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.906 [2024-11-17 01:37:07.359114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:58.906 [2024-11-17 01:37:07.359142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.906 [2024-11-17 01:37:07.361123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.906 [2024-11-17 01:37:07.361204] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:59.166 spare 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.166 [2024-11-17 01:37:07.371019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.166 [2024-11-17 01:37:07.372801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.166 [2024-11-17 01:37:07.372908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.166 [2024-11-17 01:37:07.372977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:59.166 [2024-11-17 01:37:07.373109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:59.166 [2024-11-17 01:37:07.373151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:59.166 [2024-11-17 01:37:07.373398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:59.166 [2024-11-17 01:37:07.380446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:59.166 [2024-11-17 01:37:07.380509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:59.166 [2024-11-17 01:37:07.380742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.166 01:37:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.166 "name": "raid_bdev1", 00:16:59.166 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:16:59.166 "strip_size_kb": 64, 00:16:59.166 "state": "online", 00:16:59.166 "raid_level": "raid5f", 00:16:59.166 "superblock": false, 00:16:59.166 "num_base_bdevs": 4, 00:16:59.166 
"num_base_bdevs_discovered": 4, 00:16:59.166 "num_base_bdevs_operational": 4, 00:16:59.166 "base_bdevs_list": [ 00:16:59.166 { 00:16:59.166 "name": "BaseBdev1", 00:16:59.166 "uuid": "679c4db3-8be4-5a8a-a289-2eb691a5bd68", 00:16:59.166 "is_configured": true, 00:16:59.166 "data_offset": 0, 00:16:59.166 "data_size": 65536 00:16:59.166 }, 00:16:59.166 { 00:16:59.166 "name": "BaseBdev2", 00:16:59.166 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:16:59.166 "is_configured": true, 00:16:59.166 "data_offset": 0, 00:16:59.166 "data_size": 65536 00:16:59.166 }, 00:16:59.166 { 00:16:59.166 "name": "BaseBdev3", 00:16:59.166 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:16:59.166 "is_configured": true, 00:16:59.166 "data_offset": 0, 00:16:59.166 "data_size": 65536 00:16:59.166 }, 00:16:59.166 { 00:16:59.166 "name": "BaseBdev4", 00:16:59.166 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:16:59.166 "is_configured": true, 00:16:59.166 "data_offset": 0, 00:16:59.166 "data_size": 65536 00:16:59.166 } 00:16:59.166 ] 00:16:59.166 }' 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.166 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:59.426 [2024-11-17 01:37:07.812337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:59.426 01:37:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.685 01:37:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:59.685 [2024-11-17 01:37:08.063755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:59.685 /dev/nbd0 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:59.685 1+0 records in 00:16:59.685 1+0 records out 00:16:59.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397094 s, 10.3 MB/s 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:59.685 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:59.945 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:59.945 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:59.945 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.945 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:59.945 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:59.945 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:59.945 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:00.205 512+0 records in 00:17:00.205 512+0 records out 00:17:00.205 100663296 bytes (101 MB, 96 MiB) copied, 0.45286 s, 222 MB/s 00:17:00.205 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:00.205 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.205 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:00.205 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:00.205 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:00.205 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.205 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:00.465 [2024-11-17 01:37:08.809918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.465 [2024-11-17 01:37:08.832198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.465 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.465 "name": "raid_bdev1", 00:17:00.465 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:00.465 "strip_size_kb": 64, 00:17:00.465 "state": "online", 00:17:00.465 "raid_level": "raid5f", 00:17:00.466 "superblock": false, 00:17:00.466 "num_base_bdevs": 4, 00:17:00.466 "num_base_bdevs_discovered": 3, 00:17:00.466 "num_base_bdevs_operational": 3, 00:17:00.466 "base_bdevs_list": [ 00:17:00.466 { 00:17:00.466 "name": null, 00:17:00.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.466 "is_configured": false, 00:17:00.466 "data_offset": 0, 00:17:00.466 "data_size": 65536 00:17:00.466 }, 00:17:00.466 { 00:17:00.466 "name": "BaseBdev2", 00:17:00.466 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:00.466 "is_configured": true, 00:17:00.466 "data_offset": 0, 00:17:00.466 "data_size": 65536 00:17:00.466 }, 00:17:00.466 { 00:17:00.466 "name": "BaseBdev3", 00:17:00.466 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:00.466 "is_configured": true, 00:17:00.466 
"data_offset": 0, 00:17:00.466 "data_size": 65536 00:17:00.466 }, 00:17:00.466 { 00:17:00.466 "name": "BaseBdev4", 00:17:00.466 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:00.466 "is_configured": true, 00:17:00.466 "data_offset": 0, 00:17:00.466 "data_size": 65536 00:17:00.466 } 00:17:00.466 ] 00:17:00.466 }' 00:17:00.466 01:37:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.466 01:37:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.035 01:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:01.035 01:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.035 01:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.035 [2024-11-17 01:37:09.255513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.035 [2024-11-17 01:37:09.270244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:01.035 01:37:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.035 01:37:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:01.035 [2024-11-17 01:37:09.278999] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.975 
01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.975 "name": "raid_bdev1", 00:17:01.975 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:01.975 "strip_size_kb": 64, 00:17:01.975 "state": "online", 00:17:01.975 "raid_level": "raid5f", 00:17:01.975 "superblock": false, 00:17:01.975 "num_base_bdevs": 4, 00:17:01.975 "num_base_bdevs_discovered": 4, 00:17:01.975 "num_base_bdevs_operational": 4, 00:17:01.975 "process": { 00:17:01.975 "type": "rebuild", 00:17:01.975 "target": "spare", 00:17:01.975 "progress": { 00:17:01.975 "blocks": 19200, 00:17:01.975 "percent": 9 00:17:01.975 } 00:17:01.975 }, 00:17:01.975 "base_bdevs_list": [ 00:17:01.975 { 00:17:01.975 "name": "spare", 00:17:01.975 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:01.975 "is_configured": true, 00:17:01.975 "data_offset": 0, 00:17:01.975 "data_size": 65536 00:17:01.975 }, 00:17:01.975 { 00:17:01.975 "name": "BaseBdev2", 00:17:01.975 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:01.975 "is_configured": true, 00:17:01.975 "data_offset": 0, 00:17:01.975 "data_size": 65536 00:17:01.975 }, 00:17:01.975 { 00:17:01.975 "name": "BaseBdev3", 00:17:01.975 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:01.975 "is_configured": true, 00:17:01.975 "data_offset": 0, 00:17:01.975 "data_size": 65536 00:17:01.975 }, 00:17:01.975 { 00:17:01.975 "name": "BaseBdev4", 00:17:01.975 "uuid": 
"a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:01.975 "is_configured": true, 00:17:01.975 "data_offset": 0, 00:17:01.975 "data_size": 65536 00:17:01.975 } 00:17:01.975 ] 00:17:01.975 }' 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.975 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.975 [2024-11-17 01:37:10.413599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.236 [2024-11-17 01:37:10.484498] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:02.236 [2024-11-17 01:37:10.484623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.236 [2024-11-17 01:37:10.484661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.236 [2024-11-17 01:37:10.484685] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.236 "name": "raid_bdev1", 00:17:02.236 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:02.236 "strip_size_kb": 64, 00:17:02.236 "state": "online", 00:17:02.236 "raid_level": "raid5f", 00:17:02.236 "superblock": false, 00:17:02.236 "num_base_bdevs": 4, 00:17:02.236 "num_base_bdevs_discovered": 3, 00:17:02.236 "num_base_bdevs_operational": 3, 00:17:02.236 "base_bdevs_list": [ 00:17:02.236 { 00:17:02.236 "name": null, 00:17:02.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.236 "is_configured": false, 00:17:02.236 "data_offset": 0, 
00:17:02.236 "data_size": 65536 00:17:02.236 }, 00:17:02.236 { 00:17:02.236 "name": "BaseBdev2", 00:17:02.236 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:02.236 "is_configured": true, 00:17:02.236 "data_offset": 0, 00:17:02.236 "data_size": 65536 00:17:02.236 }, 00:17:02.236 { 00:17:02.236 "name": "BaseBdev3", 00:17:02.236 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:02.236 "is_configured": true, 00:17:02.236 "data_offset": 0, 00:17:02.236 "data_size": 65536 00:17:02.236 }, 00:17:02.236 { 00:17:02.236 "name": "BaseBdev4", 00:17:02.236 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:02.236 "is_configured": true, 00:17:02.236 "data_offset": 0, 00:17:02.236 "data_size": 65536 00:17:02.236 } 00:17:02.236 ] 00:17:02.236 }' 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.236 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.496 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.496 "name": "raid_bdev1", 00:17:02.496 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:02.496 "strip_size_kb": 64, 00:17:02.496 "state": "online", 00:17:02.496 "raid_level": "raid5f", 00:17:02.496 "superblock": false, 00:17:02.496 "num_base_bdevs": 4, 00:17:02.496 "num_base_bdevs_discovered": 3, 00:17:02.496 "num_base_bdevs_operational": 3, 00:17:02.496 "base_bdevs_list": [ 00:17:02.496 { 00:17:02.496 "name": null, 00:17:02.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.497 "is_configured": false, 00:17:02.497 "data_offset": 0, 00:17:02.497 "data_size": 65536 00:17:02.497 }, 00:17:02.497 { 00:17:02.497 "name": "BaseBdev2", 00:17:02.497 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:02.497 "is_configured": true, 00:17:02.497 "data_offset": 0, 00:17:02.497 "data_size": 65536 00:17:02.497 }, 00:17:02.497 { 00:17:02.497 "name": "BaseBdev3", 00:17:02.497 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:02.497 "is_configured": true, 00:17:02.497 "data_offset": 0, 00:17:02.497 "data_size": 65536 00:17:02.497 }, 00:17:02.497 { 00:17:02.497 "name": "BaseBdev4", 00:17:02.497 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:02.497 "is_configured": true, 00:17:02.497 "data_offset": 0, 00:17:02.497 "data_size": 65536 00:17:02.497 } 00:17:02.497 ] 00:17:02.497 }' 00:17:02.497 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.757 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.757 01:37:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.757 01:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.757 01:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.757 01:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.757 01:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.757 [2024-11-17 01:37:11.025176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.757 [2024-11-17 01:37:11.039637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:02.757 01:37:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.757 01:37:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:02.757 [2024-11-17 01:37:11.048424] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.705 "name": "raid_bdev1", 00:17:03.705 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:03.705 "strip_size_kb": 64, 00:17:03.705 "state": "online", 00:17:03.705 "raid_level": "raid5f", 00:17:03.705 "superblock": false, 00:17:03.705 "num_base_bdevs": 4, 00:17:03.705 "num_base_bdevs_discovered": 4, 00:17:03.705 "num_base_bdevs_operational": 4, 00:17:03.705 "process": { 00:17:03.705 "type": "rebuild", 00:17:03.705 "target": "spare", 00:17:03.705 "progress": { 00:17:03.705 "blocks": 19200, 00:17:03.705 "percent": 9 00:17:03.705 } 00:17:03.705 }, 00:17:03.705 "base_bdevs_list": [ 00:17:03.705 { 00:17:03.705 "name": "spare", 00:17:03.705 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:03.705 "is_configured": true, 00:17:03.705 "data_offset": 0, 00:17:03.705 "data_size": 65536 00:17:03.705 }, 00:17:03.705 { 00:17:03.705 "name": "BaseBdev2", 00:17:03.705 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:03.705 "is_configured": true, 00:17:03.705 "data_offset": 0, 00:17:03.705 "data_size": 65536 00:17:03.705 }, 00:17:03.705 { 00:17:03.705 "name": "BaseBdev3", 00:17:03.705 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:03.705 "is_configured": true, 00:17:03.705 "data_offset": 0, 00:17:03.705 "data_size": 65536 00:17:03.705 }, 00:17:03.705 { 00:17:03.705 "name": "BaseBdev4", 00:17:03.705 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:03.705 "is_configured": true, 00:17:03.705 "data_offset": 0, 00:17:03.705 "data_size": 65536 00:17:03.705 } 00:17:03.705 ] 00:17:03.705 }' 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.705 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=606 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.982 "name": "raid_bdev1", 00:17:03.982 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:03.982 "strip_size_kb": 64, 00:17:03.982 "state": "online", 00:17:03.982 "raid_level": "raid5f", 00:17:03.982 "superblock": false, 
00:17:03.982 "num_base_bdevs": 4, 00:17:03.982 "num_base_bdevs_discovered": 4, 00:17:03.982 "num_base_bdevs_operational": 4, 00:17:03.982 "process": { 00:17:03.982 "type": "rebuild", 00:17:03.982 "target": "spare", 00:17:03.982 "progress": { 00:17:03.982 "blocks": 21120, 00:17:03.982 "percent": 10 00:17:03.982 } 00:17:03.982 }, 00:17:03.982 "base_bdevs_list": [ 00:17:03.982 { 00:17:03.982 "name": "spare", 00:17:03.982 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:03.982 "is_configured": true, 00:17:03.982 "data_offset": 0, 00:17:03.982 "data_size": 65536 00:17:03.982 }, 00:17:03.982 { 00:17:03.982 "name": "BaseBdev2", 00:17:03.982 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:03.982 "is_configured": true, 00:17:03.982 "data_offset": 0, 00:17:03.982 "data_size": 65536 00:17:03.982 }, 00:17:03.982 { 00:17:03.982 "name": "BaseBdev3", 00:17:03.982 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:03.982 "is_configured": true, 00:17:03.982 "data_offset": 0, 00:17:03.982 "data_size": 65536 00:17:03.982 }, 00:17:03.982 { 00:17:03.982 "name": "BaseBdev4", 00:17:03.982 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:03.982 "is_configured": true, 00:17:03.982 "data_offset": 0, 00:17:03.982 "data_size": 65536 00:17:03.982 } 00:17:03.982 ] 00:17:03.982 }' 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.982 01:37:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.936 "name": "raid_bdev1", 00:17:04.936 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:04.936 "strip_size_kb": 64, 00:17:04.936 "state": "online", 00:17:04.936 "raid_level": "raid5f", 00:17:04.936 "superblock": false, 00:17:04.936 "num_base_bdevs": 4, 00:17:04.936 "num_base_bdevs_discovered": 4, 00:17:04.936 "num_base_bdevs_operational": 4, 00:17:04.936 "process": { 00:17:04.936 "type": "rebuild", 00:17:04.936 "target": "spare", 00:17:04.936 "progress": { 00:17:04.936 "blocks": 42240, 00:17:04.936 "percent": 21 00:17:04.936 } 00:17:04.936 }, 00:17:04.936 "base_bdevs_list": [ 00:17:04.936 { 00:17:04.936 "name": "spare", 00:17:04.936 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:04.936 "is_configured": true, 00:17:04.936 "data_offset": 0, 00:17:04.936 "data_size": 65536 00:17:04.936 }, 00:17:04.936 { 00:17:04.936 
"name": "BaseBdev2", 00:17:04.936 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:04.936 "is_configured": true, 00:17:04.936 "data_offset": 0, 00:17:04.936 "data_size": 65536 00:17:04.936 }, 00:17:04.936 { 00:17:04.936 "name": "BaseBdev3", 00:17:04.936 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:04.936 "is_configured": true, 00:17:04.936 "data_offset": 0, 00:17:04.936 "data_size": 65536 00:17:04.936 }, 00:17:04.936 { 00:17:04.936 "name": "BaseBdev4", 00:17:04.936 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:04.936 "is_configured": true, 00:17:04.936 "data_offset": 0, 00:17:04.936 "data_size": 65536 00:17:04.936 } 00:17:04.936 ] 00:17:04.936 }' 00:17:04.936 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.196 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.196 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.196 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.196 01:37:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.136 "name": "raid_bdev1", 00:17:06.136 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:06.136 "strip_size_kb": 64, 00:17:06.136 "state": "online", 00:17:06.136 "raid_level": "raid5f", 00:17:06.136 "superblock": false, 00:17:06.136 "num_base_bdevs": 4, 00:17:06.136 "num_base_bdevs_discovered": 4, 00:17:06.136 "num_base_bdevs_operational": 4, 00:17:06.136 "process": { 00:17:06.136 "type": "rebuild", 00:17:06.136 "target": "spare", 00:17:06.136 "progress": { 00:17:06.136 "blocks": 65280, 00:17:06.136 "percent": 33 00:17:06.136 } 00:17:06.136 }, 00:17:06.136 "base_bdevs_list": [ 00:17:06.136 { 00:17:06.136 "name": "spare", 00:17:06.136 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:06.136 "is_configured": true, 00:17:06.136 "data_offset": 0, 00:17:06.136 "data_size": 65536 00:17:06.136 }, 00:17:06.136 { 00:17:06.136 "name": "BaseBdev2", 00:17:06.136 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:06.136 "is_configured": true, 00:17:06.136 "data_offset": 0, 00:17:06.136 "data_size": 65536 00:17:06.136 }, 00:17:06.136 { 00:17:06.136 "name": "BaseBdev3", 00:17:06.136 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:06.136 "is_configured": true, 00:17:06.136 "data_offset": 0, 00:17:06.136 "data_size": 65536 00:17:06.136 }, 00:17:06.136 { 00:17:06.136 "name": "BaseBdev4", 00:17:06.136 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:06.136 "is_configured": true, 00:17:06.136 "data_offset": 0, 00:17:06.136 
"data_size": 65536 00:17:06.136 } 00:17:06.136 ] 00:17:06.136 }' 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.136 01:37:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.516 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.516 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.516 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.516 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.517 "name": "raid_bdev1", 00:17:07.517 "uuid": 
"2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:07.517 "strip_size_kb": 64, 00:17:07.517 "state": "online", 00:17:07.517 "raid_level": "raid5f", 00:17:07.517 "superblock": false, 00:17:07.517 "num_base_bdevs": 4, 00:17:07.517 "num_base_bdevs_discovered": 4, 00:17:07.517 "num_base_bdevs_operational": 4, 00:17:07.517 "process": { 00:17:07.517 "type": "rebuild", 00:17:07.517 "target": "spare", 00:17:07.517 "progress": { 00:17:07.517 "blocks": 86400, 00:17:07.517 "percent": 43 00:17:07.517 } 00:17:07.517 }, 00:17:07.517 "base_bdevs_list": [ 00:17:07.517 { 00:17:07.517 "name": "spare", 00:17:07.517 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:07.517 "is_configured": true, 00:17:07.517 "data_offset": 0, 00:17:07.517 "data_size": 65536 00:17:07.517 }, 00:17:07.517 { 00:17:07.517 "name": "BaseBdev2", 00:17:07.517 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:07.517 "is_configured": true, 00:17:07.517 "data_offset": 0, 00:17:07.517 "data_size": 65536 00:17:07.517 }, 00:17:07.517 { 00:17:07.517 "name": "BaseBdev3", 00:17:07.517 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:07.517 "is_configured": true, 00:17:07.517 "data_offset": 0, 00:17:07.517 "data_size": 65536 00:17:07.517 }, 00:17:07.517 { 00:17:07.517 "name": "BaseBdev4", 00:17:07.517 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:07.517 "is_configured": true, 00:17:07.517 "data_offset": 0, 00:17:07.517 "data_size": 65536 00:17:07.517 } 00:17:07.517 ] 00:17:07.517 }' 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.517 01:37:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:17:08.456 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.456 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.456 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.456 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.456 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.456 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.456 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.456 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.457 "name": "raid_bdev1", 00:17:08.457 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:08.457 "strip_size_kb": 64, 00:17:08.457 "state": "online", 00:17:08.457 "raid_level": "raid5f", 00:17:08.457 "superblock": false, 00:17:08.457 "num_base_bdevs": 4, 00:17:08.457 "num_base_bdevs_discovered": 4, 00:17:08.457 "num_base_bdevs_operational": 4, 00:17:08.457 "process": { 00:17:08.457 "type": "rebuild", 00:17:08.457 "target": "spare", 00:17:08.457 "progress": { 00:17:08.457 "blocks": 107520, 00:17:08.457 "percent": 54 00:17:08.457 } 00:17:08.457 }, 00:17:08.457 "base_bdevs_list": [ 00:17:08.457 { 00:17:08.457 "name": "spare", 00:17:08.457 "uuid": 
"5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:08.457 "is_configured": true, 00:17:08.457 "data_offset": 0, 00:17:08.457 "data_size": 65536 00:17:08.457 }, 00:17:08.457 { 00:17:08.457 "name": "BaseBdev2", 00:17:08.457 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:08.457 "is_configured": true, 00:17:08.457 "data_offset": 0, 00:17:08.457 "data_size": 65536 00:17:08.457 }, 00:17:08.457 { 00:17:08.457 "name": "BaseBdev3", 00:17:08.457 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:08.457 "is_configured": true, 00:17:08.457 "data_offset": 0, 00:17:08.457 "data_size": 65536 00:17:08.457 }, 00:17:08.457 { 00:17:08.457 "name": "BaseBdev4", 00:17:08.457 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:08.457 "is_configured": true, 00:17:08.457 "data_offset": 0, 00:17:08.457 "data_size": 65536 00:17:08.457 } 00:17:08.457 ] 00:17:08.457 }' 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.457 01:37:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.396 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.396 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.396 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.396 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.396 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.396 01:37:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.396 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.396 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.396 01:37:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.397 01:37:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.397 01:37:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.397 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.397 "name": "raid_bdev1", 00:17:09.397 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:09.397 "strip_size_kb": 64, 00:17:09.397 "state": "online", 00:17:09.397 "raid_level": "raid5f", 00:17:09.397 "superblock": false, 00:17:09.397 "num_base_bdevs": 4, 00:17:09.397 "num_base_bdevs_discovered": 4, 00:17:09.397 "num_base_bdevs_operational": 4, 00:17:09.397 "process": { 00:17:09.397 "type": "rebuild", 00:17:09.397 "target": "spare", 00:17:09.397 "progress": { 00:17:09.397 "blocks": 128640, 00:17:09.397 "percent": 65 00:17:09.397 } 00:17:09.397 }, 00:17:09.397 "base_bdevs_list": [ 00:17:09.397 { 00:17:09.397 "name": "spare", 00:17:09.397 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:09.397 "is_configured": true, 00:17:09.397 "data_offset": 0, 00:17:09.397 "data_size": 65536 00:17:09.397 }, 00:17:09.397 { 00:17:09.397 "name": "BaseBdev2", 00:17:09.397 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:09.397 "is_configured": true, 00:17:09.397 "data_offset": 0, 00:17:09.397 "data_size": 65536 00:17:09.397 }, 00:17:09.397 { 00:17:09.397 "name": "BaseBdev3", 00:17:09.397 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:09.397 "is_configured": true, 00:17:09.397 "data_offset": 0, 00:17:09.397 "data_size": 65536 00:17:09.397 }, 
00:17:09.397 { 00:17:09.397 "name": "BaseBdev4", 00:17:09.397 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:09.397 "is_configured": true, 00:17:09.397 "data_offset": 0, 00:17:09.397 "data_size": 65536 00:17:09.397 } 00:17:09.397 ] 00:17:09.397 }' 00:17:09.657 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.657 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.657 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.657 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.657 01:37:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.597 01:37:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:10.597 01:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.597 "name": "raid_bdev1", 00:17:10.597 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:10.597 "strip_size_kb": 64, 00:17:10.597 "state": "online", 00:17:10.597 "raid_level": "raid5f", 00:17:10.597 "superblock": false, 00:17:10.597 "num_base_bdevs": 4, 00:17:10.597 "num_base_bdevs_discovered": 4, 00:17:10.597 "num_base_bdevs_operational": 4, 00:17:10.597 "process": { 00:17:10.597 "type": "rebuild", 00:17:10.597 "target": "spare", 00:17:10.597 "progress": { 00:17:10.597 "blocks": 149760, 00:17:10.597 "percent": 76 00:17:10.597 } 00:17:10.597 }, 00:17:10.597 "base_bdevs_list": [ 00:17:10.597 { 00:17:10.597 "name": "spare", 00:17:10.597 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:10.597 "is_configured": true, 00:17:10.597 "data_offset": 0, 00:17:10.597 "data_size": 65536 00:17:10.597 }, 00:17:10.597 { 00:17:10.597 "name": "BaseBdev2", 00:17:10.597 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:10.597 "is_configured": true, 00:17:10.597 "data_offset": 0, 00:17:10.597 "data_size": 65536 00:17:10.597 }, 00:17:10.597 { 00:17:10.597 "name": "BaseBdev3", 00:17:10.597 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:10.597 "is_configured": true, 00:17:10.597 "data_offset": 0, 00:17:10.597 "data_size": 65536 00:17:10.597 }, 00:17:10.597 { 00:17:10.597 "name": "BaseBdev4", 00:17:10.597 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:10.597 "is_configured": true, 00:17:10.597 "data_offset": 0, 00:17:10.597 "data_size": 65536 00:17:10.597 } 00:17:10.597 ] 00:17:10.597 }' 00:17:10.597 01:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.597 01:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.863 01:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.863 01:37:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.863 01:37:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.802 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.802 "name": "raid_bdev1", 00:17:11.802 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:11.802 "strip_size_kb": 64, 00:17:11.802 "state": "online", 00:17:11.802 "raid_level": "raid5f", 00:17:11.802 "superblock": false, 00:17:11.802 "num_base_bdevs": 4, 00:17:11.802 "num_base_bdevs_discovered": 4, 00:17:11.803 "num_base_bdevs_operational": 4, 00:17:11.803 "process": { 00:17:11.803 "type": "rebuild", 00:17:11.803 "target": "spare", 00:17:11.803 "progress": { 00:17:11.803 "blocks": 172800, 
00:17:11.803 "percent": 87 00:17:11.803 } 00:17:11.803 }, 00:17:11.803 "base_bdevs_list": [ 00:17:11.803 { 00:17:11.803 "name": "spare", 00:17:11.803 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:11.803 "is_configured": true, 00:17:11.803 "data_offset": 0, 00:17:11.803 "data_size": 65536 00:17:11.803 }, 00:17:11.803 { 00:17:11.803 "name": "BaseBdev2", 00:17:11.803 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:11.803 "is_configured": true, 00:17:11.803 "data_offset": 0, 00:17:11.803 "data_size": 65536 00:17:11.803 }, 00:17:11.803 { 00:17:11.803 "name": "BaseBdev3", 00:17:11.803 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:11.803 "is_configured": true, 00:17:11.803 "data_offset": 0, 00:17:11.803 "data_size": 65536 00:17:11.803 }, 00:17:11.803 { 00:17:11.803 "name": "BaseBdev4", 00:17:11.803 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:11.803 "is_configured": true, 00:17:11.803 "data_offset": 0, 00:17:11.803 "data_size": 65536 00:17:11.803 } 00:17:11.803 ] 00:17:11.803 }' 00:17:11.803 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.803 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.803 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.803 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.803 01:37:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.232 "name": "raid_bdev1", 00:17:13.232 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:13.232 "strip_size_kb": 64, 00:17:13.232 "state": "online", 00:17:13.232 "raid_level": "raid5f", 00:17:13.232 "superblock": false, 00:17:13.232 "num_base_bdevs": 4, 00:17:13.232 "num_base_bdevs_discovered": 4, 00:17:13.232 "num_base_bdevs_operational": 4, 00:17:13.232 "process": { 00:17:13.232 "type": "rebuild", 00:17:13.232 "target": "spare", 00:17:13.232 "progress": { 00:17:13.232 "blocks": 193920, 00:17:13.232 "percent": 98 00:17:13.232 } 00:17:13.232 }, 00:17:13.232 "base_bdevs_list": [ 00:17:13.232 { 00:17:13.232 "name": "spare", 00:17:13.232 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:13.232 "is_configured": true, 00:17:13.232 "data_offset": 0, 00:17:13.232 "data_size": 65536 00:17:13.232 }, 00:17:13.232 { 00:17:13.232 "name": "BaseBdev2", 00:17:13.232 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:13.232 "is_configured": true, 00:17:13.232 "data_offset": 0, 00:17:13.232 "data_size": 65536 00:17:13.232 }, 00:17:13.232 { 00:17:13.232 "name": "BaseBdev3", 00:17:13.232 "uuid": 
"8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:13.232 "is_configured": true, 00:17:13.232 "data_offset": 0, 00:17:13.232 "data_size": 65536 00:17:13.232 }, 00:17:13.232 { 00:17:13.232 "name": "BaseBdev4", 00:17:13.232 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:13.232 "is_configured": true, 00:17:13.232 "data_offset": 0, 00:17:13.232 "data_size": 65536 00:17:13.232 } 00:17:13.232 ] 00:17:13.232 }' 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.232 [2024-11-17 01:37:21.390814] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:13.232 [2024-11-17 01:37:21.390931] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:13.232 [2024-11-17 01:37:21.391001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.232 01:37:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.172 "name": "raid_bdev1", 00:17:14.172 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:14.172 "strip_size_kb": 64, 00:17:14.172 "state": "online", 00:17:14.172 "raid_level": "raid5f", 00:17:14.172 "superblock": false, 00:17:14.172 "num_base_bdevs": 4, 00:17:14.172 "num_base_bdevs_discovered": 4, 00:17:14.172 "num_base_bdevs_operational": 4, 00:17:14.172 "base_bdevs_list": [ 00:17:14.172 { 00:17:14.172 "name": "spare", 00:17:14.172 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:14.172 "is_configured": true, 00:17:14.172 "data_offset": 0, 00:17:14.172 "data_size": 65536 00:17:14.172 }, 00:17:14.172 { 00:17:14.172 "name": "BaseBdev2", 00:17:14.172 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:14.172 "is_configured": true, 00:17:14.172 "data_offset": 0, 00:17:14.172 "data_size": 65536 00:17:14.172 }, 00:17:14.172 { 00:17:14.172 "name": "BaseBdev3", 00:17:14.172 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:14.172 "is_configured": true, 00:17:14.172 "data_offset": 0, 00:17:14.172 "data_size": 65536 00:17:14.172 }, 00:17:14.172 { 00:17:14.172 "name": "BaseBdev4", 00:17:14.172 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:14.172 "is_configured": true, 00:17:14.172 "data_offset": 0, 00:17:14.172 "data_size": 65536 00:17:14.172 } 00:17:14.172 ] 00:17:14.172 }' 00:17:14.172 01:37:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.172 "name": "raid_bdev1", 00:17:14.172 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:14.172 "strip_size_kb": 64, 00:17:14.172 "state": "online", 00:17:14.172 "raid_level": "raid5f", 00:17:14.172 "superblock": false, 00:17:14.172 "num_base_bdevs": 4, 00:17:14.172 
"num_base_bdevs_discovered": 4, 00:17:14.172 "num_base_bdevs_operational": 4, 00:17:14.172 "base_bdevs_list": [ 00:17:14.172 { 00:17:14.172 "name": "spare", 00:17:14.172 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:14.172 "is_configured": true, 00:17:14.172 "data_offset": 0, 00:17:14.172 "data_size": 65536 00:17:14.172 }, 00:17:14.172 { 00:17:14.172 "name": "BaseBdev2", 00:17:14.172 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:14.172 "is_configured": true, 00:17:14.172 "data_offset": 0, 00:17:14.172 "data_size": 65536 00:17:14.172 }, 00:17:14.172 { 00:17:14.172 "name": "BaseBdev3", 00:17:14.172 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:14.172 "is_configured": true, 00:17:14.172 "data_offset": 0, 00:17:14.172 "data_size": 65536 00:17:14.172 }, 00:17:14.172 { 00:17:14.172 "name": "BaseBdev4", 00:17:14.172 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:14.172 "is_configured": true, 00:17:14.172 "data_offset": 0, 00:17:14.172 "data_size": 65536 00:17:14.172 } 00:17:14.172 ] 00:17:14.172 }' 00:17:14.172 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.433 "name": "raid_bdev1", 00:17:14.433 "uuid": "2c7ce976-1255-4bba-b469-560cd9898eee", 00:17:14.433 "strip_size_kb": 64, 00:17:14.433 "state": "online", 00:17:14.433 "raid_level": "raid5f", 00:17:14.433 "superblock": false, 00:17:14.433 "num_base_bdevs": 4, 00:17:14.433 "num_base_bdevs_discovered": 4, 00:17:14.433 "num_base_bdevs_operational": 4, 00:17:14.433 "base_bdevs_list": [ 00:17:14.433 { 00:17:14.433 "name": "spare", 00:17:14.433 "uuid": "5d665a32-851d-531c-9d5b-b1a9498b9b38", 00:17:14.433 "is_configured": true, 00:17:14.433 "data_offset": 0, 00:17:14.433 "data_size": 65536 00:17:14.433 }, 00:17:14.433 { 00:17:14.433 "name": "BaseBdev2", 00:17:14.433 "uuid": "3c34c080-60ca-559d-bf9a-c75fd4ca358d", 00:17:14.433 "is_configured": true, 00:17:14.433 
"data_offset": 0, 00:17:14.433 "data_size": 65536 00:17:14.433 }, 00:17:14.433 { 00:17:14.433 "name": "BaseBdev3", 00:17:14.433 "uuid": "8c3adb89-af0e-5dab-987e-0f077fa1953e", 00:17:14.433 "is_configured": true, 00:17:14.433 "data_offset": 0, 00:17:14.433 "data_size": 65536 00:17:14.433 }, 00:17:14.433 { 00:17:14.433 "name": "BaseBdev4", 00:17:14.433 "uuid": "a78e25b6-51e3-5f31-8973-c2a5b866b211", 00:17:14.433 "is_configured": true, 00:17:14.433 "data_offset": 0, 00:17:14.433 "data_size": 65536 00:17:14.433 } 00:17:14.433 ] 00:17:14.433 }' 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.433 01:37:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.004 [2024-11-17 01:37:23.177524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.004 [2024-11-17 01:37:23.177556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.004 [2024-11-17 01:37:23.177631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.004 [2024-11-17 01:37:23.177712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.004 [2024-11-17 01:37:23.177721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.004 01:37:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.004 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:15.004 /dev/nbd0 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:15.265 01:37:23 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.265 1+0 records in 00:17:15.265 1+0 records out 00:17:15.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499757 s, 8.2 MB/s 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:15.265 /dev/nbd1 00:17:15.265 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.525 1+0 records in 00:17:15.525 1+0 records out 00:17:15.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432519 s, 9.5 MB/s 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.525 01:37:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:15.525 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.526 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:15.526 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:15.526 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.526 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:15.526 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.526 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:15.526 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.526 01:37:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.786 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84334 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84334 ']' 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84334 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84334 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84334' 00:17:16.046 killing process with pid 84334 00:17:16.046 Received shutdown signal, test time was about 60.000000 seconds 00:17:16.046 00:17:16.046 Latency(us) 00:17:16.046 [2024-11-17T01:37:24.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.046 [2024-11-17T01:37:24.506Z] =================================================================================================================== 00:17:16.046 [2024-11-17T01:37:24.506Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84334 00:17:16.046 [2024-11-17 01:37:24.385622] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.046 01:37:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84334 00:17:16.617 [2024-11-17 01:37:24.845671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.556 01:37:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:17.557 00:17:17.557 real 0m19.696s 00:17:17.557 user 0m23.453s 00:17:17.557 sys 0m2.181s 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.557 ************************************ 00:17:17.557 END TEST raid5f_rebuild_test 00:17:17.557 ************************************ 00:17:17.557 01:37:25 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:17.557 01:37:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:17:17.557 01:37:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.557 01:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.557 ************************************ 00:17:17.557 START TEST raid5f_rebuild_test_sb 00:17:17.557 ************************************ 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84853 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 84853 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84853 ']' 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.557 01:37:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.817 [2024-11-17 01:37:26.046258] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:17.817 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:17.817 Zero copy mechanism will not be used. 
00:17:17.817 [2024-11-17 01:37:26.046803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84853 ] 00:17:17.817 [2024-11-17 01:37:26.219333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.077 [2024-11-17 01:37:26.321243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.077 [2024-11-17 01:37:26.511233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.077 [2024-11-17 01:37:26.511262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.647 BaseBdev1_malloc 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.647 [2024-11-17 01:37:26.913269] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:18.647 [2024-11-17 01:37:26.913633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.647 [2024-11-17 01:37:26.913668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:18.647 [2024-11-17 01:37:26.913680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.647 [2024-11-17 01:37:26.916084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.647 [2024-11-17 01:37:26.916230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.647 BaseBdev1 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.647 BaseBdev2_malloc 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.647 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.647 [2024-11-17 01:37:26.962828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:18.648 [2024-11-17 01:37:26.963031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:18.648 [2024-11-17 01:37:26.963095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:18.648 [2024-11-17 01:37:26.963173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.648 [2024-11-17 01:37:26.965170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.648 [2024-11-17 01:37:26.965258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:18.648 BaseBdev2 00:17:18.648 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.648 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:18.648 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:18.648 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.648 01:37:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.648 BaseBdev3_malloc 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.648 [2024-11-17 01:37:27.046448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:18.648 [2024-11-17 01:37:27.046825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.648 [2024-11-17 01:37:27.046922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:18.648 [2024-11-17 
01:37:27.046975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.648 [2024-11-17 01:37:27.049013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.648 [2024-11-17 01:37:27.049105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:18.648 BaseBdev3 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.648 BaseBdev4_malloc 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.648 [2024-11-17 01:37:27.096239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:18.648 [2024-11-17 01:37:27.096420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.648 [2024-11-17 01:37:27.096475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:18.648 [2024-11-17 01:37:27.096529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.648 [2024-11-17 01:37:27.098504] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:18.648 [2024-11-17 01:37:27.098629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:18.648 BaseBdev4 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.648 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.908 spare_malloc 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.908 spare_delay 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.908 [2024-11-17 01:37:27.164464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.908 [2024-11-17 01:37:27.164715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.908 [2024-11-17 01:37:27.164799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:18.908 [2024-11-17 01:37:27.164874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.908 [2024-11-17 01:37:27.166851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.908 [2024-11-17 01:37:27.166874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.908 spare 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.908 [2024-11-17 01:37:27.176496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.908 [2024-11-17 01:37:27.178215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.908 [2024-11-17 01:37:27.178273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.908 [2024-11-17 01:37:27.178322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:18.908 [2024-11-17 01:37:27.178500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:18.908 [2024-11-17 01:37:27.178516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:18.908 [2024-11-17 01:37:27.178745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:18.908 [2024-11-17 01:37:27.185692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:18.908 [2024-11-17 01:37:27.185710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:18.908 [2024-11-17 01:37:27.185944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.908 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.908 01:37:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.908 "name": "raid_bdev1", 00:17:18.908 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:18.908 "strip_size_kb": 64, 00:17:18.908 "state": "online", 00:17:18.908 "raid_level": "raid5f", 00:17:18.908 "superblock": true, 00:17:18.908 "num_base_bdevs": 4, 00:17:18.908 "num_base_bdevs_discovered": 4, 00:17:18.909 "num_base_bdevs_operational": 4, 00:17:18.909 "base_bdevs_list": [ 00:17:18.909 { 00:17:18.909 "name": "BaseBdev1", 00:17:18.909 "uuid": "b3304cc1-305e-5f0b-8183-d517cb5a3a0a", 00:17:18.909 "is_configured": true, 00:17:18.909 "data_offset": 2048, 00:17:18.909 "data_size": 63488 00:17:18.909 }, 00:17:18.909 { 00:17:18.909 "name": "BaseBdev2", 00:17:18.909 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:18.909 "is_configured": true, 00:17:18.909 "data_offset": 2048, 00:17:18.909 "data_size": 63488 00:17:18.909 }, 00:17:18.909 { 00:17:18.909 "name": "BaseBdev3", 00:17:18.909 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:18.909 "is_configured": true, 00:17:18.909 "data_offset": 2048, 00:17:18.909 "data_size": 63488 00:17:18.909 }, 00:17:18.909 { 00:17:18.909 "name": "BaseBdev4", 00:17:18.909 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:18.909 "is_configured": true, 00:17:18.909 "data_offset": 2048, 00:17:18.909 "data_size": 63488 00:17:18.909 } 00:17:18.909 ] 00:17:18.909 }' 00:17:18.909 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.909 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.169 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.169 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.169 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.169 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:19.169 [2024-11-17 01:37:27.585576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.169 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:19.429 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:19.429 [2024-11-17 01:37:27.856943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:19.429 /dev/nbd0 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.688 1+0 records in 00:17:19.688 1+0 records out 00:17:19.688 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000383203 s, 10.7 MB/s 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:19.688 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:19.689 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.689 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:19.689 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:19.689 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:19.689 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:19.689 01:37:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:20.258 496+0 records in 00:17:20.258 496+0 records out 00:17:20.258 97517568 bytes (98 MB, 93 MiB) copied, 0.524645 s, 186 MB/s 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:20.258 [2024-11-17 01:37:28.679869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.258 [2024-11-17 01:37:28.708894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.258 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.518 "name": "raid_bdev1", 00:17:20.518 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:20.518 "strip_size_kb": 64, 00:17:20.518 "state": "online", 00:17:20.518 "raid_level": "raid5f", 00:17:20.518 "superblock": true, 00:17:20.518 "num_base_bdevs": 4, 00:17:20.518 "num_base_bdevs_discovered": 3, 00:17:20.518 "num_base_bdevs_operational": 3, 00:17:20.518 "base_bdevs_list": [ 00:17:20.518 { 00:17:20.518 "name": null, 
00:17:20.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.518 "is_configured": false, 00:17:20.518 "data_offset": 0, 00:17:20.518 "data_size": 63488 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "name": "BaseBdev2", 00:17:20.518 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:20.518 "is_configured": true, 00:17:20.518 "data_offset": 2048, 00:17:20.518 "data_size": 63488 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "name": "BaseBdev3", 00:17:20.518 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:20.518 "is_configured": true, 00:17:20.518 "data_offset": 2048, 00:17:20.518 "data_size": 63488 00:17:20.518 }, 00:17:20.518 { 00:17:20.518 "name": "BaseBdev4", 00:17:20.518 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:20.518 "is_configured": true, 00:17:20.518 "data_offset": 2048, 00:17:20.518 "data_size": 63488 00:17:20.518 } 00:17:20.518 ] 00:17:20.518 }' 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.518 01:37:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.778 01:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:20.778 01:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.778 01:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.778 [2024-11-17 01:37:29.180061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.778 [2024-11-17 01:37:29.194444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:20.778 01:37:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.778 01:37:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:20.778 [2024-11-17 01:37:29.203376] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.159 "name": "raid_bdev1", 00:17:22.159 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:22.159 "strip_size_kb": 64, 00:17:22.159 "state": "online", 00:17:22.159 "raid_level": "raid5f", 00:17:22.159 "superblock": true, 00:17:22.159 "num_base_bdevs": 4, 00:17:22.159 "num_base_bdevs_discovered": 4, 00:17:22.159 "num_base_bdevs_operational": 4, 00:17:22.159 "process": { 00:17:22.159 "type": "rebuild", 00:17:22.159 "target": "spare", 00:17:22.159 "progress": { 00:17:22.159 "blocks": 19200, 00:17:22.159 "percent": 10 00:17:22.159 } 00:17:22.159 }, 00:17:22.159 "base_bdevs_list": [ 00:17:22.159 { 00:17:22.159 "name": "spare", 00:17:22.159 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:22.159 "is_configured": true, 
00:17:22.159 "data_offset": 2048, 00:17:22.159 "data_size": 63488 00:17:22.159 }, 00:17:22.159 { 00:17:22.159 "name": "BaseBdev2", 00:17:22.159 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:22.159 "is_configured": true, 00:17:22.159 "data_offset": 2048, 00:17:22.159 "data_size": 63488 00:17:22.159 }, 00:17:22.159 { 00:17:22.159 "name": "BaseBdev3", 00:17:22.159 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:22.159 "is_configured": true, 00:17:22.159 "data_offset": 2048, 00:17:22.159 "data_size": 63488 00:17:22.159 }, 00:17:22.159 { 00:17:22.159 "name": "BaseBdev4", 00:17:22.159 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:22.159 "is_configured": true, 00:17:22.159 "data_offset": 2048, 00:17:22.159 "data_size": 63488 00:17:22.159 } 00:17:22.159 ] 00:17:22.159 }' 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.159 [2024-11-17 01:37:30.329945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.159 [2024-11-17 01:37:30.408775] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:22.159 [2024-11-17 01:37:30.408835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.159 [2024-11-17 
01:37:30.408851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.159 [2024-11-17 01:37:30.408859] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.159 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.159 "name": "raid_bdev1", 00:17:22.159 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:22.159 "strip_size_kb": 64, 00:17:22.159 "state": "online", 00:17:22.159 "raid_level": "raid5f", 00:17:22.159 "superblock": true, 00:17:22.159 "num_base_bdevs": 4, 00:17:22.159 "num_base_bdevs_discovered": 3, 00:17:22.159 "num_base_bdevs_operational": 3, 00:17:22.159 "base_bdevs_list": [ 00:17:22.159 { 00:17:22.159 "name": null, 00:17:22.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.159 "is_configured": false, 00:17:22.159 "data_offset": 0, 00:17:22.159 "data_size": 63488 00:17:22.159 }, 00:17:22.160 { 00:17:22.160 "name": "BaseBdev2", 00:17:22.160 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:22.160 "is_configured": true, 00:17:22.160 "data_offset": 2048, 00:17:22.160 "data_size": 63488 00:17:22.160 }, 00:17:22.160 { 00:17:22.160 "name": "BaseBdev3", 00:17:22.160 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:22.160 "is_configured": true, 00:17:22.160 "data_offset": 2048, 00:17:22.160 "data_size": 63488 00:17:22.160 }, 00:17:22.160 { 00:17:22.160 "name": "BaseBdev4", 00:17:22.160 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:22.160 "is_configured": true, 00:17:22.160 "data_offset": 2048, 00:17:22.160 "data_size": 63488 00:17:22.160 } 00:17:22.160 ] 00:17:22.160 }' 00:17:22.160 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.160 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.730 "name": "raid_bdev1", 00:17:22.730 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:22.730 "strip_size_kb": 64, 00:17:22.730 "state": "online", 00:17:22.730 "raid_level": "raid5f", 00:17:22.730 "superblock": true, 00:17:22.730 "num_base_bdevs": 4, 00:17:22.730 "num_base_bdevs_discovered": 3, 00:17:22.730 "num_base_bdevs_operational": 3, 00:17:22.730 "base_bdevs_list": [ 00:17:22.730 { 00:17:22.730 "name": null, 00:17:22.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.730 "is_configured": false, 00:17:22.730 "data_offset": 0, 00:17:22.730 "data_size": 63488 00:17:22.730 }, 00:17:22.730 { 00:17:22.730 "name": "BaseBdev2", 00:17:22.730 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:22.730 "is_configured": true, 00:17:22.730 "data_offset": 2048, 00:17:22.730 "data_size": 63488 00:17:22.730 }, 00:17:22.730 { 00:17:22.730 "name": "BaseBdev3", 00:17:22.730 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:22.730 "is_configured": true, 00:17:22.730 "data_offset": 2048, 00:17:22.730 "data_size": 63488 00:17:22.730 }, 
00:17:22.730 { 00:17:22.730 "name": "BaseBdev4", 00:17:22.730 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:22.730 "is_configured": true, 00:17:22.730 "data_offset": 2048, 00:17:22.730 "data_size": 63488 00:17:22.730 } 00:17:22.730 ] 00:17:22.730 }' 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.730 01:37:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.730 01:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.730 01:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:22.730 01:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.730 01:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.730 [2024-11-17 01:37:31.063869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.730 [2024-11-17 01:37:31.077454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:22.730 01:37:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.731 01:37:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:22.731 [2024-11-17 01:37:31.086157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
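The log entries around `bdev_raid.sh@707`/`@711` repeat one pattern: poll the RAID state once per second until the rebuild completes or a timeout (here 626 s) expires. A minimal sketch of that loop, with `fetch_progress_percent` as a stand-in (an assumption, not an SPDK helper) for `rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r` to read `.process.progress.percent`:

```shell
# Hedged sketch of the polling loop this log repeats.
# fetch_progress_percent is a hypothetical stand-in for reading
# .process.progress.percent out of `bdev_raid_get_bdevs` JSON.
fetch_progress_percent() {
  local p=$(( $1 * 34 ))          # fake progress: 0, 34, 68, 100, ...
  (( p > 100 )) && p=100
  echo "$p"
}

timeout=5                          # the real test uses timeout=626
i=0
# Same guard as bdev_raid.sh@707: bash's builtin SECONDS counter
# bounds the wait instead of counting iterations.
while (( SECONDS < timeout )); do
  percent=$(fetch_progress_percent "$i")
  echo "progress: ${percent}%"
  (( percent >= 100 )) && break    # rebuild finished
  i=$(( i + 1 ))                   # the real loop does `sleep 1` here
done
```

The real helper additionally re-checks `.process.type` and `.process.target` each pass, so a rebuild that vanishes between polls is caught rather than waited on forever.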
00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.671 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.931 "name": "raid_bdev1", 00:17:23.931 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:23.931 "strip_size_kb": 64, 00:17:23.931 "state": "online", 00:17:23.931 "raid_level": "raid5f", 00:17:23.931 "superblock": true, 00:17:23.931 "num_base_bdevs": 4, 00:17:23.931 "num_base_bdevs_discovered": 4, 00:17:23.931 "num_base_bdevs_operational": 4, 00:17:23.931 "process": { 00:17:23.931 "type": "rebuild", 00:17:23.931 "target": "spare", 00:17:23.931 "progress": { 00:17:23.931 "blocks": 19200, 00:17:23.931 "percent": 10 00:17:23.931 } 00:17:23.931 }, 00:17:23.931 "base_bdevs_list": [ 00:17:23.931 { 00:17:23.931 "name": "spare", 00:17:23.931 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:23.931 "is_configured": true, 00:17:23.931 "data_offset": 2048, 00:17:23.931 "data_size": 63488 00:17:23.931 }, 00:17:23.931 { 00:17:23.931 "name": "BaseBdev2", 00:17:23.931 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:23.931 "is_configured": true, 00:17:23.931 "data_offset": 2048, 00:17:23.931 "data_size": 63488 00:17:23.931 }, 00:17:23.931 { 00:17:23.931 "name": "BaseBdev3", 00:17:23.931 "uuid": 
"ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:23.931 "is_configured": true, 00:17:23.931 "data_offset": 2048, 00:17:23.931 "data_size": 63488 00:17:23.931 }, 00:17:23.931 { 00:17:23.931 "name": "BaseBdev4", 00:17:23.931 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:23.931 "is_configured": true, 00:17:23.931 "data_offset": 2048, 00:17:23.931 "data_size": 63488 00:17:23.931 } 00:17:23.931 ] 00:17:23.931 }' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:23.931 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=626 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.931 "name": "raid_bdev1", 00:17:23.931 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:23.931 "strip_size_kb": 64, 00:17:23.931 "state": "online", 00:17:23.931 "raid_level": "raid5f", 00:17:23.931 "superblock": true, 00:17:23.931 "num_base_bdevs": 4, 00:17:23.931 "num_base_bdevs_discovered": 4, 00:17:23.931 "num_base_bdevs_operational": 4, 00:17:23.931 "process": { 00:17:23.931 "type": "rebuild", 00:17:23.931 "target": "spare", 00:17:23.931 "progress": { 00:17:23.931 "blocks": 21120, 00:17:23.931 "percent": 11 00:17:23.931 } 00:17:23.931 }, 00:17:23.931 "base_bdevs_list": [ 00:17:23.931 { 00:17:23.931 "name": "spare", 00:17:23.931 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:23.931 "is_configured": true, 00:17:23.931 "data_offset": 2048, 00:17:23.931 "data_size": 63488 00:17:23.931 }, 00:17:23.931 { 00:17:23.931 "name": "BaseBdev2", 00:17:23.931 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:23.931 "is_configured": true, 00:17:23.931 "data_offset": 2048, 00:17:23.931 "data_size": 63488 00:17:23.931 }, 00:17:23.931 { 00:17:23.931 "name": "BaseBdev3", 00:17:23.931 "uuid": 
"ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:23.931 "is_configured": true, 00:17:23.931 "data_offset": 2048, 00:17:23.931 "data_size": 63488 00:17:23.931 }, 00:17:23.931 { 00:17:23.931 "name": "BaseBdev4", 00:17:23.931 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:23.931 "is_configured": true, 00:17:23.931 "data_offset": 2048, 00:17:23.931 "data_size": 63488 00:17:23.931 } 00:17:23.931 ] 00:17:23.931 }' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.931 01:37:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.316 "name": "raid_bdev1", 00:17:25.316 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:25.316 "strip_size_kb": 64, 00:17:25.316 "state": "online", 00:17:25.316 "raid_level": "raid5f", 00:17:25.316 "superblock": true, 00:17:25.316 "num_base_bdevs": 4, 00:17:25.316 "num_base_bdevs_discovered": 4, 00:17:25.316 "num_base_bdevs_operational": 4, 00:17:25.316 "process": { 00:17:25.316 "type": "rebuild", 00:17:25.316 "target": "spare", 00:17:25.316 "progress": { 00:17:25.316 "blocks": 42240, 00:17:25.316 "percent": 22 00:17:25.316 } 00:17:25.316 }, 00:17:25.316 "base_bdevs_list": [ 00:17:25.316 { 00:17:25.316 "name": "spare", 00:17:25.316 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:25.316 "is_configured": true, 00:17:25.316 "data_offset": 2048, 00:17:25.316 "data_size": 63488 00:17:25.316 }, 00:17:25.316 { 00:17:25.316 "name": "BaseBdev2", 00:17:25.316 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:25.316 "is_configured": true, 00:17:25.316 "data_offset": 2048, 00:17:25.316 "data_size": 63488 00:17:25.316 }, 00:17:25.316 { 00:17:25.316 "name": "BaseBdev3", 00:17:25.316 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:25.316 "is_configured": true, 00:17:25.316 "data_offset": 2048, 00:17:25.316 "data_size": 63488 00:17:25.316 }, 00:17:25.316 { 00:17:25.316 "name": "BaseBdev4", 00:17:25.316 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:25.316 "is_configured": true, 00:17:25.316 "data_offset": 2048, 00:17:25.316 "data_size": 63488 00:17:25.316 } 00:17:25.316 ] 00:17:25.316 }' 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.316 01:37:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.316 01:37:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.257 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.257 "name": "raid_bdev1", 00:17:26.257 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:26.257 "strip_size_kb": 64, 00:17:26.257 "state": "online", 00:17:26.257 "raid_level": "raid5f", 00:17:26.257 "superblock": true, 
00:17:26.257 "num_base_bdevs": 4, 00:17:26.257 "num_base_bdevs_discovered": 4, 00:17:26.257 "num_base_bdevs_operational": 4, 00:17:26.257 "process": { 00:17:26.257 "type": "rebuild", 00:17:26.257 "target": "spare", 00:17:26.257 "progress": { 00:17:26.257 "blocks": 65280, 00:17:26.257 "percent": 34 00:17:26.257 } 00:17:26.257 }, 00:17:26.257 "base_bdevs_list": [ 00:17:26.257 { 00:17:26.257 "name": "spare", 00:17:26.257 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:26.257 "is_configured": true, 00:17:26.257 "data_offset": 2048, 00:17:26.258 "data_size": 63488 00:17:26.258 }, 00:17:26.258 { 00:17:26.258 "name": "BaseBdev2", 00:17:26.258 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:26.258 "is_configured": true, 00:17:26.258 "data_offset": 2048, 00:17:26.258 "data_size": 63488 00:17:26.258 }, 00:17:26.258 { 00:17:26.258 "name": "BaseBdev3", 00:17:26.258 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:26.258 "is_configured": true, 00:17:26.258 "data_offset": 2048, 00:17:26.258 "data_size": 63488 00:17:26.258 }, 00:17:26.258 { 00:17:26.258 "name": "BaseBdev4", 00:17:26.258 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:26.258 "is_configured": true, 00:17:26.258 "data_offset": 2048, 00:17:26.258 "data_size": 63488 00:17:26.258 } 00:17:26.258 ] 00:17:26.258 }' 00:17:26.258 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.258 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.258 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.258 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.258 01:37:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.641 01:37:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.641 "name": "raid_bdev1", 00:17:27.641 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:27.641 "strip_size_kb": 64, 00:17:27.641 "state": "online", 00:17:27.641 "raid_level": "raid5f", 00:17:27.641 "superblock": true, 00:17:27.641 "num_base_bdevs": 4, 00:17:27.641 "num_base_bdevs_discovered": 4, 00:17:27.641 "num_base_bdevs_operational": 4, 00:17:27.641 "process": { 00:17:27.641 "type": "rebuild", 00:17:27.641 "target": "spare", 00:17:27.641 "progress": { 00:17:27.641 "blocks": 86400, 00:17:27.641 "percent": 45 00:17:27.641 } 00:17:27.641 }, 00:17:27.641 "base_bdevs_list": [ 00:17:27.641 { 00:17:27.641 "name": "spare", 00:17:27.641 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:27.641 "is_configured": true, 00:17:27.641 "data_offset": 2048, 00:17:27.641 
"data_size": 63488 00:17:27.641 }, 00:17:27.641 { 00:17:27.641 "name": "BaseBdev2", 00:17:27.641 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:27.641 "is_configured": true, 00:17:27.641 "data_offset": 2048, 00:17:27.641 "data_size": 63488 00:17:27.641 }, 00:17:27.641 { 00:17:27.641 "name": "BaseBdev3", 00:17:27.641 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:27.641 "is_configured": true, 00:17:27.641 "data_offset": 2048, 00:17:27.641 "data_size": 63488 00:17:27.641 }, 00:17:27.641 { 00:17:27.641 "name": "BaseBdev4", 00:17:27.641 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:27.641 "is_configured": true, 00:17:27.641 "data_offset": 2048, 00:17:27.641 "data_size": 63488 00:17:27.641 } 00:17:27.641 ] 00:17:27.641 }' 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.641 01:37:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.582 "name": "raid_bdev1", 00:17:28.582 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:28.582 "strip_size_kb": 64, 00:17:28.582 "state": "online", 00:17:28.582 "raid_level": "raid5f", 00:17:28.582 "superblock": true, 00:17:28.582 "num_base_bdevs": 4, 00:17:28.582 "num_base_bdevs_discovered": 4, 00:17:28.582 "num_base_bdevs_operational": 4, 00:17:28.582 "process": { 00:17:28.582 "type": "rebuild", 00:17:28.582 "target": "spare", 00:17:28.582 "progress": { 00:17:28.582 "blocks": 109440, 00:17:28.582 "percent": 57 00:17:28.582 } 00:17:28.582 }, 00:17:28.582 "base_bdevs_list": [ 00:17:28.582 { 00:17:28.582 "name": "spare", 00:17:28.582 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:28.582 "is_configured": true, 00:17:28.582 "data_offset": 2048, 00:17:28.582 "data_size": 63488 00:17:28.582 }, 00:17:28.582 { 00:17:28.582 "name": "BaseBdev2", 00:17:28.582 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:28.582 "is_configured": true, 00:17:28.582 "data_offset": 2048, 00:17:28.582 "data_size": 63488 00:17:28.582 }, 00:17:28.582 { 00:17:28.582 "name": "BaseBdev3", 00:17:28.582 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:28.582 "is_configured": true, 00:17:28.582 "data_offset": 2048, 00:17:28.582 "data_size": 63488 00:17:28.582 }, 00:17:28.582 { 00:17:28.582 "name": "BaseBdev4", 
00:17:28.582 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:28.582 "is_configured": true, 00:17:28.582 "data_offset": 2048, 00:17:28.582 "data_size": 63488 00:17:28.582 } 00:17:28.582 ] 00:17:28.582 }' 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.582 01:37:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.965 01:37:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.965 "name": "raid_bdev1", 00:17:29.965 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:29.965 "strip_size_kb": 64, 00:17:29.965 "state": "online", 00:17:29.965 "raid_level": "raid5f", 00:17:29.965 "superblock": true, 00:17:29.965 "num_base_bdevs": 4, 00:17:29.965 "num_base_bdevs_discovered": 4, 00:17:29.965 "num_base_bdevs_operational": 4, 00:17:29.965 "process": { 00:17:29.965 "type": "rebuild", 00:17:29.965 "target": "spare", 00:17:29.965 "progress": { 00:17:29.965 "blocks": 132480, 00:17:29.965 "percent": 69 00:17:29.965 } 00:17:29.965 }, 00:17:29.965 "base_bdevs_list": [ 00:17:29.965 { 00:17:29.965 "name": "spare", 00:17:29.965 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:29.965 "is_configured": true, 00:17:29.965 "data_offset": 2048, 00:17:29.965 "data_size": 63488 00:17:29.965 }, 00:17:29.965 { 00:17:29.965 "name": "BaseBdev2", 00:17:29.965 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:29.965 "is_configured": true, 00:17:29.965 "data_offset": 2048, 00:17:29.965 "data_size": 63488 00:17:29.965 }, 00:17:29.965 { 00:17:29.965 "name": "BaseBdev3", 00:17:29.965 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:29.965 "is_configured": true, 00:17:29.965 "data_offset": 2048, 00:17:29.965 "data_size": 63488 00:17:29.965 }, 00:17:29.965 { 00:17:29.965 "name": "BaseBdev4", 00:17:29.965 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:29.965 "is_configured": true, 00:17:29.965 "data_offset": 2048, 00:17:29.965 "data_size": 63488 00:17:29.965 } 00:17:29.965 ] 00:17:29.965 }' 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.965 01:37:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.904 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.905 "name": "raid_bdev1", 00:17:30.905 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:30.905 "strip_size_kb": 64, 00:17:30.905 "state": "online", 00:17:30.905 "raid_level": "raid5f", 00:17:30.905 "superblock": true, 00:17:30.905 "num_base_bdevs": 4, 00:17:30.905 "num_base_bdevs_discovered": 4, 00:17:30.905 "num_base_bdevs_operational": 4, 00:17:30.905 "process": { 00:17:30.905 "type": "rebuild", 00:17:30.905 "target": "spare", 
00:17:30.905 "progress": { 00:17:30.905 "blocks": 153600, 00:17:30.905 "percent": 80 00:17:30.905 } 00:17:30.905 }, 00:17:30.905 "base_bdevs_list": [ 00:17:30.905 { 00:17:30.905 "name": "spare", 00:17:30.905 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:30.905 "is_configured": true, 00:17:30.905 "data_offset": 2048, 00:17:30.905 "data_size": 63488 00:17:30.905 }, 00:17:30.905 { 00:17:30.905 "name": "BaseBdev2", 00:17:30.905 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:30.905 "is_configured": true, 00:17:30.905 "data_offset": 2048, 00:17:30.905 "data_size": 63488 00:17:30.905 }, 00:17:30.905 { 00:17:30.905 "name": "BaseBdev3", 00:17:30.905 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:30.905 "is_configured": true, 00:17:30.905 "data_offset": 2048, 00:17:30.905 "data_size": 63488 00:17:30.905 }, 00:17:30.905 { 00:17:30.905 "name": "BaseBdev4", 00:17:30.905 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:30.905 "is_configured": true, 00:17:30.905 "data_offset": 2048, 00:17:30.905 "data_size": 63488 00:17:30.905 } 00:17:30.905 ] 00:17:30.905 }' 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.905 01:37:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.284 "name": "raid_bdev1", 00:17:32.284 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:32.284 "strip_size_kb": 64, 00:17:32.284 "state": "online", 00:17:32.284 "raid_level": "raid5f", 00:17:32.284 "superblock": true, 00:17:32.284 "num_base_bdevs": 4, 00:17:32.284 "num_base_bdevs_discovered": 4, 00:17:32.284 "num_base_bdevs_operational": 4, 00:17:32.284 "process": { 00:17:32.284 "type": "rebuild", 00:17:32.284 "target": "spare", 00:17:32.284 "progress": { 00:17:32.284 "blocks": 176640, 00:17:32.284 "percent": 92 00:17:32.284 } 00:17:32.284 }, 00:17:32.284 "base_bdevs_list": [ 00:17:32.284 { 00:17:32.284 "name": "spare", 00:17:32.284 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:32.284 "is_configured": true, 00:17:32.284 "data_offset": 2048, 00:17:32.284 "data_size": 63488 00:17:32.284 }, 00:17:32.284 { 00:17:32.284 "name": "BaseBdev2", 00:17:32.284 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:32.284 "is_configured": true, 00:17:32.284 
"data_offset": 2048, 00:17:32.284 "data_size": 63488 00:17:32.284 }, 00:17:32.284 { 00:17:32.284 "name": "BaseBdev3", 00:17:32.284 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:32.284 "is_configured": true, 00:17:32.284 "data_offset": 2048, 00:17:32.284 "data_size": 63488 00:17:32.284 }, 00:17:32.284 { 00:17:32.284 "name": "BaseBdev4", 00:17:32.284 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:32.284 "is_configured": true, 00:17:32.284 "data_offset": 2048, 00:17:32.284 "data_size": 63488 00:17:32.284 } 00:17:32.284 ] 00:17:32.284 }' 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.284 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.285 01:37:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.855 [2024-11-17 01:37:41.126375] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:32.855 [2024-11-17 01:37:41.126493] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:32.855 [2024-11-17 01:37:41.126622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.114 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.114 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.115 01:37:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.115 "name": "raid_bdev1", 00:17:33.115 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:33.115 "strip_size_kb": 64, 00:17:33.115 "state": "online", 00:17:33.115 "raid_level": "raid5f", 00:17:33.115 "superblock": true, 00:17:33.115 "num_base_bdevs": 4, 00:17:33.115 "num_base_bdevs_discovered": 4, 00:17:33.115 "num_base_bdevs_operational": 4, 00:17:33.115 "base_bdevs_list": [ 00:17:33.115 { 00:17:33.115 "name": "spare", 00:17:33.115 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:33.115 "is_configured": true, 00:17:33.115 "data_offset": 2048, 00:17:33.115 "data_size": 63488 00:17:33.115 }, 00:17:33.115 { 00:17:33.115 "name": "BaseBdev2", 00:17:33.115 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:33.115 "is_configured": true, 00:17:33.115 "data_offset": 2048, 00:17:33.115 "data_size": 63488 00:17:33.115 }, 00:17:33.115 { 00:17:33.115 "name": "BaseBdev3", 00:17:33.115 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:33.115 "is_configured": true, 00:17:33.115 "data_offset": 2048, 00:17:33.115 "data_size": 63488 00:17:33.115 }, 00:17:33.115 { 00:17:33.115 "name": "BaseBdev4", 00:17:33.115 "uuid": 
"017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:33.115 "is_configured": true, 00:17:33.115 "data_offset": 2048, 00:17:33.115 "data_size": 63488 00:17:33.115 } 00:17:33.115 ] 00:17:33.115 }' 00:17:33.115 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.375 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:33.375 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.375 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:33.375 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:33.375 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.375 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.375 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.375 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.376 "name": 
"raid_bdev1", 00:17:33.376 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:33.376 "strip_size_kb": 64, 00:17:33.376 "state": "online", 00:17:33.376 "raid_level": "raid5f", 00:17:33.376 "superblock": true, 00:17:33.376 "num_base_bdevs": 4, 00:17:33.376 "num_base_bdevs_discovered": 4, 00:17:33.376 "num_base_bdevs_operational": 4, 00:17:33.376 "base_bdevs_list": [ 00:17:33.376 { 00:17:33.376 "name": "spare", 00:17:33.376 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:33.376 "is_configured": true, 00:17:33.376 "data_offset": 2048, 00:17:33.376 "data_size": 63488 00:17:33.376 }, 00:17:33.376 { 00:17:33.376 "name": "BaseBdev2", 00:17:33.376 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:33.376 "is_configured": true, 00:17:33.376 "data_offset": 2048, 00:17:33.376 "data_size": 63488 00:17:33.376 }, 00:17:33.376 { 00:17:33.376 "name": "BaseBdev3", 00:17:33.376 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:33.376 "is_configured": true, 00:17:33.376 "data_offset": 2048, 00:17:33.376 "data_size": 63488 00:17:33.376 }, 00:17:33.376 { 00:17:33.376 "name": "BaseBdev4", 00:17:33.376 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:33.376 "is_configured": true, 00:17:33.376 "data_offset": 2048, 00:17:33.376 "data_size": 63488 00:17:33.376 } 00:17:33.376 ] 00:17:33.376 }' 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.376 "name": "raid_bdev1", 00:17:33.376 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:33.376 "strip_size_kb": 64, 00:17:33.376 "state": "online", 00:17:33.376 "raid_level": "raid5f", 00:17:33.376 "superblock": true, 00:17:33.376 "num_base_bdevs": 4, 00:17:33.376 "num_base_bdevs_discovered": 4, 00:17:33.376 "num_base_bdevs_operational": 4, 00:17:33.376 "base_bdevs_list": [ 00:17:33.376 { 00:17:33.376 "name": "spare", 
00:17:33.376 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:33.376 "is_configured": true, 00:17:33.376 "data_offset": 2048, 00:17:33.376 "data_size": 63488 00:17:33.376 }, 00:17:33.376 { 00:17:33.376 "name": "BaseBdev2", 00:17:33.376 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:33.376 "is_configured": true, 00:17:33.376 "data_offset": 2048, 00:17:33.376 "data_size": 63488 00:17:33.376 }, 00:17:33.376 { 00:17:33.376 "name": "BaseBdev3", 00:17:33.376 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:33.376 "is_configured": true, 00:17:33.376 "data_offset": 2048, 00:17:33.376 "data_size": 63488 00:17:33.376 }, 00:17:33.376 { 00:17:33.376 "name": "BaseBdev4", 00:17:33.376 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:33.376 "is_configured": true, 00:17:33.376 "data_offset": 2048, 00:17:33.376 "data_size": 63488 00:17:33.376 } 00:17:33.376 ] 00:17:33.376 }' 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.376 01:37:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.947 [2024-11-17 01:37:42.177276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.947 [2024-11-17 01:37:42.177304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.947 [2024-11-17 01:37:42.177375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.947 [2024-11-17 01:37:42.177459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.947 [2024-11-17 01:37:42.177478] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.947 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:34.207 /dev/nbd0 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:34.207 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.208 1+0 records in 00:17:34.208 1+0 records out 00:17:34.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482822 s, 8.5 MB/s 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:34.208 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:34.468 /dev/nbd1 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.468 1+0 records in 00:17:34.468 1+0 records out 00:17:34.468 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000366272 s, 11.2 MB/s 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.468 01:37:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.728 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:34.989 
01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.989 [2024-11-17 01:37:43.390713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.989 [2024-11-17 01:37:43.390778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.989 [2024-11-17 01:37:43.390801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:34.989 [2024-11-17 01:37:43.390810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.989 [2024-11-17 01:37:43.392933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.989 [2024-11-17 01:37:43.392971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.989 [2024-11-17 01:37:43.393058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:34.989 [2024-11-17 01:37:43.393113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.989 [2024-11-17 01:37:43.393254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.989 [2024-11-17 01:37:43.393348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.989 [2024-11-17 01:37:43.393447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:17:34.989 spare 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.989 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.249 [2024-11-17 01:37:43.493381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:35.249 [2024-11-17 01:37:43.493411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:35.249 [2024-11-17 01:37:43.493637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:35.249 [2024-11-17 01:37:43.500293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:35.249 [2024-11-17 01:37:43.500312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:35.249 [2024-11-17 01:37:43.500477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.249 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.249 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.250 "name": "raid_bdev1", 00:17:35.250 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:35.250 "strip_size_kb": 64, 00:17:35.250 "state": "online", 00:17:35.250 "raid_level": "raid5f", 00:17:35.250 "superblock": true, 00:17:35.250 "num_base_bdevs": 4, 00:17:35.250 "num_base_bdevs_discovered": 4, 00:17:35.250 "num_base_bdevs_operational": 4, 00:17:35.250 "base_bdevs_list": [ 00:17:35.250 { 00:17:35.250 "name": "spare", 00:17:35.250 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:35.250 "is_configured": true, 00:17:35.250 "data_offset": 2048, 00:17:35.250 "data_size": 63488 00:17:35.250 }, 00:17:35.250 { 00:17:35.250 "name": "BaseBdev2", 00:17:35.250 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:35.250 "is_configured": true, 00:17:35.250 "data_offset": 2048, 00:17:35.250 "data_size": 63488 00:17:35.250 }, 00:17:35.250 { 00:17:35.250 "name": 
"BaseBdev3", 00:17:35.250 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:35.250 "is_configured": true, 00:17:35.250 "data_offset": 2048, 00:17:35.250 "data_size": 63488 00:17:35.250 }, 00:17:35.250 { 00:17:35.250 "name": "BaseBdev4", 00:17:35.250 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:35.250 "is_configured": true, 00:17:35.250 "data_offset": 2048, 00:17:35.250 "data_size": 63488 00:17:35.250 } 00:17:35.250 ] 00:17:35.250 }' 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.250 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.510 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.511 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.771 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.771 "name": "raid_bdev1", 00:17:35.771 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:35.771 
"strip_size_kb": 64, 00:17:35.771 "state": "online", 00:17:35.771 "raid_level": "raid5f", 00:17:35.771 "superblock": true, 00:17:35.771 "num_base_bdevs": 4, 00:17:35.771 "num_base_bdevs_discovered": 4, 00:17:35.771 "num_base_bdevs_operational": 4, 00:17:35.771 "base_bdevs_list": [ 00:17:35.771 { 00:17:35.771 "name": "spare", 00:17:35.771 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:35.771 "is_configured": true, 00:17:35.771 "data_offset": 2048, 00:17:35.771 "data_size": 63488 00:17:35.771 }, 00:17:35.771 { 00:17:35.771 "name": "BaseBdev2", 00:17:35.771 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:35.771 "is_configured": true, 00:17:35.771 "data_offset": 2048, 00:17:35.771 "data_size": 63488 00:17:35.771 }, 00:17:35.771 { 00:17:35.771 "name": "BaseBdev3", 00:17:35.771 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:35.771 "is_configured": true, 00:17:35.771 "data_offset": 2048, 00:17:35.771 "data_size": 63488 00:17:35.771 }, 00:17:35.771 { 00:17:35.771 "name": "BaseBdev4", 00:17:35.771 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:35.771 "is_configured": true, 00:17:35.771 "data_offset": 2048, 00:17:35.771 "data_size": 63488 00:17:35.771 } 00:17:35.771 ] 00:17:35.771 }' 00:17:35.771 01:37:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.771 [2024-11-17 01:37:44.119093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.771 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.771 "name": "raid_bdev1", 00:17:35.771 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:35.771 "strip_size_kb": 64, 00:17:35.771 "state": "online", 00:17:35.771 "raid_level": "raid5f", 00:17:35.771 "superblock": true, 00:17:35.771 "num_base_bdevs": 4, 00:17:35.771 "num_base_bdevs_discovered": 3, 00:17:35.771 "num_base_bdevs_operational": 3, 00:17:35.771 "base_bdevs_list": [ 00:17:35.771 { 00:17:35.772 "name": null, 00:17:35.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.772 "is_configured": false, 00:17:35.772 "data_offset": 0, 00:17:35.772 "data_size": 63488 00:17:35.772 }, 00:17:35.772 { 00:17:35.772 "name": "BaseBdev2", 00:17:35.772 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:35.772 "is_configured": true, 00:17:35.772 "data_offset": 2048, 00:17:35.772 "data_size": 63488 00:17:35.772 }, 00:17:35.772 { 00:17:35.772 "name": "BaseBdev3", 00:17:35.772 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:35.772 "is_configured": true, 00:17:35.772 "data_offset": 2048, 00:17:35.772 "data_size": 63488 00:17:35.772 }, 00:17:35.772 { 00:17:35.772 "name": "BaseBdev4", 00:17:35.772 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:35.772 "is_configured": true, 00:17:35.772 "data_offset": 2048, 00:17:35.772 "data_size": 63488 00:17:35.772 } 00:17:35.772 ] 00:17:35.772 }' 
00:17:35.772 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.772 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.342 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.342 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.342 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.342 [2024-11-17 01:37:44.574344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.342 [2024-11-17 01:37:44.574536] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:36.342 [2024-11-17 01:37:44.574575] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:36.342 [2024-11-17 01:37:44.574608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.342 [2024-11-17 01:37:44.587892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:36.342 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.342 01:37:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:36.342 [2024-11-17 01:37:44.596111] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.282 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.282 "name": "raid_bdev1", 00:17:37.282 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:37.282 "strip_size_kb": 64, 00:17:37.282 "state": "online", 00:17:37.282 "raid_level": "raid5f", 00:17:37.282 "superblock": true, 00:17:37.282 "num_base_bdevs": 4, 00:17:37.282 "num_base_bdevs_discovered": 4, 00:17:37.282 "num_base_bdevs_operational": 4, 00:17:37.282 "process": { 00:17:37.282 "type": "rebuild", 00:17:37.282 "target": "spare", 00:17:37.282 "progress": { 00:17:37.282 "blocks": 19200, 00:17:37.282 "percent": 10 00:17:37.282 } 00:17:37.282 }, 00:17:37.282 "base_bdevs_list": [ 00:17:37.282 { 00:17:37.282 "name": "spare", 00:17:37.282 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:37.282 "is_configured": true, 00:17:37.282 "data_offset": 2048, 00:17:37.282 "data_size": 63488 00:17:37.282 }, 00:17:37.282 { 00:17:37.282 "name": "BaseBdev2", 00:17:37.282 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:37.282 "is_configured": true, 00:17:37.282 "data_offset": 2048, 00:17:37.282 "data_size": 63488 00:17:37.282 }, 00:17:37.282 { 00:17:37.282 "name": "BaseBdev3", 00:17:37.282 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:37.282 
"is_configured": true, 00:17:37.283 "data_offset": 2048, 00:17:37.283 "data_size": 63488 00:17:37.283 }, 00:17:37.283 { 00:17:37.283 "name": "BaseBdev4", 00:17:37.283 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:37.283 "is_configured": true, 00:17:37.283 "data_offset": 2048, 00:17:37.283 "data_size": 63488 00:17:37.283 } 00:17:37.283 ] 00:17:37.283 }' 00:17:37.283 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.283 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.283 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.283 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.283 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.283 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.283 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.283 [2024-11-17 01:37:45.739308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.543 [2024-11-17 01:37:45.801402] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.543 [2024-11-17 01:37:45.801512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.543 [2024-11-17 01:37:45.801528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.543 [2024-11-17 01:37:45.801538] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.543 "name": "raid_bdev1", 00:17:37.543 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:37.543 "strip_size_kb": 64, 00:17:37.543 "state": "online", 00:17:37.543 "raid_level": "raid5f", 00:17:37.543 "superblock": true, 00:17:37.543 "num_base_bdevs": 4, 00:17:37.543 "num_base_bdevs_discovered": 3, 
00:17:37.543 "num_base_bdevs_operational": 3, 00:17:37.543 "base_bdevs_list": [ 00:17:37.543 { 00:17:37.543 "name": null, 00:17:37.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.543 "is_configured": false, 00:17:37.543 "data_offset": 0, 00:17:37.543 "data_size": 63488 00:17:37.543 }, 00:17:37.543 { 00:17:37.543 "name": "BaseBdev2", 00:17:37.543 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:37.543 "is_configured": true, 00:17:37.543 "data_offset": 2048, 00:17:37.543 "data_size": 63488 00:17:37.543 }, 00:17:37.543 { 00:17:37.543 "name": "BaseBdev3", 00:17:37.543 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:37.543 "is_configured": true, 00:17:37.543 "data_offset": 2048, 00:17:37.543 "data_size": 63488 00:17:37.543 }, 00:17:37.543 { 00:17:37.543 "name": "BaseBdev4", 00:17:37.543 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:37.543 "is_configured": true, 00:17:37.543 "data_offset": 2048, 00:17:37.543 "data_size": 63488 00:17:37.543 } 00:17:37.543 ] 00:17:37.543 }' 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.543 01:37:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.804 01:37:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.804 01:37:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.804 01:37:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.804 [2024-11-17 01:37:46.257354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.804 [2024-11-17 01:37:46.257461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.804 [2024-11-17 01:37:46.257504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:37.804 [2024-11-17 01:37:46.257535] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.804 [2024-11-17 01:37:46.258057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.804 [2024-11-17 01:37:46.258123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.804 [2024-11-17 01:37:46.258237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:37.804 [2024-11-17 01:37:46.258280] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:37.804 [2024-11-17 01:37:46.258329] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:37.804 [2024-11-17 01:37:46.258393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.064 [2024-11-17 01:37:46.271684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:38.064 spare 00:17:38.064 01:37:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.064 01:37:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:38.064 [2024-11-17 01:37:46.279798] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.003 "name": "raid_bdev1", 00:17:39.003 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:39.003 "strip_size_kb": 64, 00:17:39.003 "state": "online", 00:17:39.003 "raid_level": "raid5f", 00:17:39.003 "superblock": true, 00:17:39.003 "num_base_bdevs": 4, 00:17:39.003 "num_base_bdevs_discovered": 4, 00:17:39.003 "num_base_bdevs_operational": 4, 00:17:39.003 "process": { 00:17:39.003 "type": "rebuild", 00:17:39.003 "target": "spare", 00:17:39.003 "progress": { 00:17:39.003 "blocks": 19200, 00:17:39.003 "percent": 10 00:17:39.003 } 00:17:39.003 }, 00:17:39.003 "base_bdevs_list": [ 00:17:39.003 { 00:17:39.003 "name": "spare", 00:17:39.003 "uuid": "60249581-e765-5811-ae81-283070fa7349", 00:17:39.003 "is_configured": true, 00:17:39.003 "data_offset": 2048, 00:17:39.003 "data_size": 63488 00:17:39.003 }, 00:17:39.003 { 00:17:39.003 "name": "BaseBdev2", 00:17:39.003 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:39.003 "is_configured": true, 00:17:39.003 "data_offset": 2048, 00:17:39.003 "data_size": 63488 00:17:39.003 }, 00:17:39.003 { 00:17:39.003 "name": "BaseBdev3", 00:17:39.003 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:39.003 "is_configured": true, 00:17:39.003 "data_offset": 2048, 00:17:39.003 "data_size": 63488 00:17:39.003 }, 00:17:39.003 { 00:17:39.003 "name": "BaseBdev4", 00:17:39.003 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 
00:17:39.003 "is_configured": true, 00:17:39.003 "data_offset": 2048, 00:17:39.003 "data_size": 63488 00:17:39.003 } 00:17:39.003 ] 00:17:39.003 }' 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.003 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.003 [2024-11-17 01:37:47.422357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.284 [2024-11-17 01:37:47.485123] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.284 [2024-11-17 01:37:47.485173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.284 [2024-11-17 01:37:47.485191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.284 [2024-11-17 01:37:47.485198] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.284 "name": "raid_bdev1", 00:17:39.284 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:39.284 "strip_size_kb": 64, 00:17:39.284 "state": "online", 00:17:39.284 "raid_level": "raid5f", 00:17:39.284 "superblock": true, 00:17:39.284 "num_base_bdevs": 4, 00:17:39.284 "num_base_bdevs_discovered": 3, 00:17:39.284 "num_base_bdevs_operational": 3, 00:17:39.284 "base_bdevs_list": [ 00:17:39.284 { 00:17:39.284 "name": null, 00:17:39.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.284 "is_configured": 
false, 00:17:39.284 "data_offset": 0, 00:17:39.284 "data_size": 63488 00:17:39.284 }, 00:17:39.284 { 00:17:39.284 "name": "BaseBdev2", 00:17:39.284 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:39.284 "is_configured": true, 00:17:39.284 "data_offset": 2048, 00:17:39.284 "data_size": 63488 00:17:39.284 }, 00:17:39.284 { 00:17:39.284 "name": "BaseBdev3", 00:17:39.284 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:39.284 "is_configured": true, 00:17:39.284 "data_offset": 2048, 00:17:39.284 "data_size": 63488 00:17:39.284 }, 00:17:39.284 { 00:17:39.284 "name": "BaseBdev4", 00:17:39.284 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:39.284 "is_configured": true, 00:17:39.284 "data_offset": 2048, 00:17:39.284 "data_size": 63488 00:17:39.284 } 00:17:39.284 ] 00:17:39.284 }' 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.284 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:17:39.594 01:37:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.594 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.594 "name": "raid_bdev1", 00:17:39.594 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:39.594 "strip_size_kb": 64, 00:17:39.594 "state": "online", 00:17:39.594 "raid_level": "raid5f", 00:17:39.594 "superblock": true, 00:17:39.594 "num_base_bdevs": 4, 00:17:39.594 "num_base_bdevs_discovered": 3, 00:17:39.594 "num_base_bdevs_operational": 3, 00:17:39.594 "base_bdevs_list": [ 00:17:39.594 { 00:17:39.594 "name": null, 00:17:39.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.594 "is_configured": false, 00:17:39.594 "data_offset": 0, 00:17:39.594 "data_size": 63488 00:17:39.594 }, 00:17:39.594 { 00:17:39.594 "name": "BaseBdev2", 00:17:39.594 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:39.594 "is_configured": true, 00:17:39.594 "data_offset": 2048, 00:17:39.594 "data_size": 63488 00:17:39.594 }, 00:17:39.594 { 00:17:39.594 "name": "BaseBdev3", 00:17:39.594 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:39.594 "is_configured": true, 00:17:39.594 "data_offset": 2048, 00:17:39.594 "data_size": 63488 00:17:39.594 }, 00:17:39.594 { 00:17:39.594 "name": "BaseBdev4", 00:17:39.594 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:39.594 "is_configured": true, 00:17:39.594 "data_offset": 2048, 00:17:39.594 "data_size": 63488 00:17:39.594 } 00:17:39.594 ] 00:17:39.594 }' 00:17:39.594 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e 
]] 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.866 [2024-11-17 01:37:48.120296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.866 [2024-11-17 01:37:48.120406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.866 [2024-11-17 01:37:48.120431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:39.866 [2024-11-17 01:37:48.120441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.866 [2024-11-17 01:37:48.120905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.866 [2024-11-17 01:37:48.120925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.866 [2024-11-17 01:37:48.120999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:39.866 [2024-11-17 01:37:48.121012] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.866 [2024-11-17 01:37:48.121024] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:17:39.866 [2024-11-17 01:37:48.121034] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:39.866 BaseBdev1 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.866 01:37:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.807 "name": "raid_bdev1", 00:17:40.807 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:40.807 "strip_size_kb": 64, 00:17:40.807 "state": "online", 00:17:40.807 "raid_level": "raid5f", 00:17:40.807 "superblock": true, 00:17:40.807 "num_base_bdevs": 4, 00:17:40.807 "num_base_bdevs_discovered": 3, 00:17:40.807 "num_base_bdevs_operational": 3, 00:17:40.807 "base_bdevs_list": [ 00:17:40.807 { 00:17:40.807 "name": null, 00:17:40.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.807 "is_configured": false, 00:17:40.807 "data_offset": 0, 00:17:40.807 "data_size": 63488 00:17:40.807 }, 00:17:40.807 { 00:17:40.807 "name": "BaseBdev2", 00:17:40.807 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:40.807 "is_configured": true, 00:17:40.807 "data_offset": 2048, 00:17:40.807 "data_size": 63488 00:17:40.807 }, 00:17:40.807 { 00:17:40.807 "name": "BaseBdev3", 00:17:40.807 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:40.807 "is_configured": true, 00:17:40.807 "data_offset": 2048, 00:17:40.807 "data_size": 63488 00:17:40.807 }, 00:17:40.807 { 00:17:40.807 "name": "BaseBdev4", 00:17:40.807 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:40.807 "is_configured": true, 00:17:40.807 "data_offset": 2048, 00:17:40.807 "data_size": 63488 00:17:40.807 } 00:17:40.807 ] 00:17:40.807 }' 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.807 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.066 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.326 "name": "raid_bdev1", 00:17:41.326 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:41.326 "strip_size_kb": 64, 00:17:41.326 "state": "online", 00:17:41.326 "raid_level": "raid5f", 00:17:41.326 "superblock": true, 00:17:41.326 "num_base_bdevs": 4, 00:17:41.326 "num_base_bdevs_discovered": 3, 00:17:41.326 "num_base_bdevs_operational": 3, 00:17:41.326 "base_bdevs_list": [ 00:17:41.326 { 00:17:41.326 "name": null, 00:17:41.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.326 "is_configured": false, 00:17:41.326 "data_offset": 0, 00:17:41.326 "data_size": 63488 00:17:41.326 }, 00:17:41.326 { 00:17:41.326 "name": "BaseBdev2", 00:17:41.326 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:41.326 "is_configured": true, 00:17:41.326 "data_offset": 2048, 00:17:41.326 "data_size": 63488 00:17:41.326 }, 00:17:41.326 { 00:17:41.326 "name": "BaseBdev3", 00:17:41.326 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:41.326 "is_configured": true, 00:17:41.326 "data_offset": 2048, 00:17:41.326 "data_size": 63488 00:17:41.326 }, 
00:17:41.326 { 00:17:41.326 "name": "BaseBdev4", 00:17:41.326 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:41.326 "is_configured": true, 00:17:41.326 "data_offset": 2048, 00:17:41.326 "data_size": 63488 00:17:41.326 } 00:17:41.326 ] 00:17:41.326 }' 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.326 [2024-11-17 01:37:49.657788] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.326 [2024-11-17 01:37:49.657926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:41.326 [2024-11-17 01:37:49.657942] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:41.326 request: 00:17:41.326 { 00:17:41.326 "base_bdev": "BaseBdev1", 00:17:41.326 "raid_bdev": "raid_bdev1", 00:17:41.326 "method": "bdev_raid_add_base_bdev", 00:17:41.326 "req_id": 1 00:17:41.326 } 00:17:41.326 Got JSON-RPC error response 00:17:41.326 response: 00:17:41.326 { 00:17:41.326 "code": -22, 00:17:41.326 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:41.326 } 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:41.326 01:37:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.266 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.526 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.526 "name": "raid_bdev1", 00:17:42.526 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:42.526 "strip_size_kb": 64, 00:17:42.526 "state": "online", 00:17:42.526 "raid_level": "raid5f", 00:17:42.526 "superblock": true, 00:17:42.526 "num_base_bdevs": 4, 00:17:42.526 "num_base_bdevs_discovered": 3, 00:17:42.526 "num_base_bdevs_operational": 3, 00:17:42.526 "base_bdevs_list": [ 00:17:42.526 { 00:17:42.526 "name": null, 00:17:42.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.526 "is_configured": false, 00:17:42.526 "data_offset": 0, 00:17:42.526 "data_size": 63488 00:17:42.526 }, 00:17:42.526 { 00:17:42.526 "name": "BaseBdev2", 00:17:42.526 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:42.526 "is_configured": true, 00:17:42.526 
"data_offset": 2048, 00:17:42.526 "data_size": 63488 00:17:42.526 }, 00:17:42.526 { 00:17:42.526 "name": "BaseBdev3", 00:17:42.526 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:42.526 "is_configured": true, 00:17:42.526 "data_offset": 2048, 00:17:42.526 "data_size": 63488 00:17:42.526 }, 00:17:42.526 { 00:17:42.526 "name": "BaseBdev4", 00:17:42.526 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:42.526 "is_configured": true, 00:17:42.526 "data_offset": 2048, 00:17:42.526 "data_size": 63488 00:17:42.526 } 00:17:42.526 ] 00:17:42.526 }' 00:17:42.526 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.526 01:37:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.787 
"name": "raid_bdev1", 00:17:42.787 "uuid": "2c34846a-3a59-4a00-b8e2-2c7cf3c63a94", 00:17:42.787 "strip_size_kb": 64, 00:17:42.787 "state": "online", 00:17:42.787 "raid_level": "raid5f", 00:17:42.787 "superblock": true, 00:17:42.787 "num_base_bdevs": 4, 00:17:42.787 "num_base_bdevs_discovered": 3, 00:17:42.787 "num_base_bdevs_operational": 3, 00:17:42.787 "base_bdevs_list": [ 00:17:42.787 { 00:17:42.787 "name": null, 00:17:42.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.787 "is_configured": false, 00:17:42.787 "data_offset": 0, 00:17:42.787 "data_size": 63488 00:17:42.787 }, 00:17:42.787 { 00:17:42.787 "name": "BaseBdev2", 00:17:42.787 "uuid": "61e07aa5-764c-55eb-83e2-93192e9d6667", 00:17:42.787 "is_configured": true, 00:17:42.787 "data_offset": 2048, 00:17:42.787 "data_size": 63488 00:17:42.787 }, 00:17:42.787 { 00:17:42.787 "name": "BaseBdev3", 00:17:42.787 "uuid": "ff8228c8-b6f1-5072-b12f-c9ca7dfb0f5e", 00:17:42.787 "is_configured": true, 00:17:42.787 "data_offset": 2048, 00:17:42.787 "data_size": 63488 00:17:42.787 }, 00:17:42.787 { 00:17:42.787 "name": "BaseBdev4", 00:17:42.787 "uuid": "017fa81b-a721-58b4-b8e9-fc5189b4e0c5", 00:17:42.787 "is_configured": true, 00:17:42.787 "data_offset": 2048, 00:17:42.787 "data_size": 63488 00:17:42.787 } 00:17:42.787 ] 00:17:42.787 }' 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.787 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84853 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84853 ']' 00:17:43.048 01:37:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84853 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84853 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84853' 00:17:43.048 killing process with pid 84853 00:17:43.048 Received shutdown signal, test time was about 60.000000 seconds 00:17:43.048 00:17:43.048 Latency(us) 00:17:43.048 [2024-11-17T01:37:51.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.048 [2024-11-17T01:37:51.508Z] =================================================================================================================== 00:17:43.048 [2024-11-17T01:37:51.508Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.048 01:37:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84853 00:17:43.048 [2024-11-17 01:37:51.308934] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.048 [2024-11-17 01:37:51.309039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.048 [2024-11-17 01:37:51.309108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.048 [2024-11-17 01:37:51.309121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:43.048 01:37:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84853 00:17:43.308 [2024-11-17 01:37:51.764272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.692 01:37:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:44.692 00:17:44.692 real 0m26.831s 00:17:44.692 user 0m33.561s 00:17:44.692 sys 0m3.139s 00:17:44.692 01:37:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.692 01:37:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.692 ************************************ 00:17:44.692 END TEST raid5f_rebuild_test_sb 00:17:44.692 ************************************ 00:17:44.692 01:37:52 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:44.692 01:37:52 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:44.692 01:37:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:44.692 01:37:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.692 01:37:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.692 ************************************ 00:17:44.692 START TEST raid_state_function_test_sb_4k 00:17:44.692 ************************************ 00:17:44.692 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:44.692 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:44.692 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:44.692 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:44.692 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:44.692 01:37:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85665 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85665' 00:17:44.693 Process raid pid: 85665 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85665 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85665 ']' 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.693 01:37:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.693 [2024-11-17 01:37:52.950409] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:44.693 [2024-11-17 01:37:52.950615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.693 [2024-11-17 01:37:53.125336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.953 [2024-11-17 01:37:53.230576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.213 [2024-11-17 01:37:53.414240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.213 [2024-11-17 01:37:53.414352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.473 [2024-11-17 01:37:53.774419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.473 [2024-11-17 01:37:53.774473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.473 [2024-11-17 01:37:53.774482] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.473 [2024-11-17 01:37:53.774491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.473 "name": "Existed_Raid", 00:17:45.473 "uuid": 
"6f2691d6-a292-4aca-bd0e-2a1e48b92e22", 00:17:45.473 "strip_size_kb": 0, 00:17:45.473 "state": "configuring", 00:17:45.473 "raid_level": "raid1", 00:17:45.473 "superblock": true, 00:17:45.473 "num_base_bdevs": 2, 00:17:45.473 "num_base_bdevs_discovered": 0, 00:17:45.473 "num_base_bdevs_operational": 2, 00:17:45.473 "base_bdevs_list": [ 00:17:45.473 { 00:17:45.473 "name": "BaseBdev1", 00:17:45.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.473 "is_configured": false, 00:17:45.473 "data_offset": 0, 00:17:45.473 "data_size": 0 00:17:45.473 }, 00:17:45.473 { 00:17:45.473 "name": "BaseBdev2", 00:17:45.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.473 "is_configured": false, 00:17:45.473 "data_offset": 0, 00:17:45.473 "data_size": 0 00:17:45.473 } 00:17:45.473 ] 00:17:45.473 }' 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.473 01:37:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 [2024-11-17 01:37:54.233576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.045 [2024-11-17 01:37:54.233671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:46.045 01:37:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 [2024-11-17 01:37:54.245552] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.045 [2024-11-17 01:37:54.245594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.045 [2024-11-17 01:37:54.245602] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.045 [2024-11-17 01:37:54.245613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 [2024-11-17 01:37:54.288531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.045 BaseBdev1 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 [ 00:17:46.045 { 00:17:46.045 "name": "BaseBdev1", 00:17:46.045 "aliases": [ 00:17:46.045 "023dd14a-d32f-497f-8f44-14ebec112abc" 00:17:46.045 ], 00:17:46.045 "product_name": "Malloc disk", 00:17:46.045 "block_size": 4096, 00:17:46.045 "num_blocks": 8192, 00:17:46.045 "uuid": "023dd14a-d32f-497f-8f44-14ebec112abc", 00:17:46.045 "assigned_rate_limits": { 00:17:46.045 "rw_ios_per_sec": 0, 00:17:46.045 "rw_mbytes_per_sec": 0, 00:17:46.045 "r_mbytes_per_sec": 0, 00:17:46.045 "w_mbytes_per_sec": 0 00:17:46.045 }, 00:17:46.045 "claimed": true, 00:17:46.045 "claim_type": "exclusive_write", 00:17:46.045 "zoned": false, 00:17:46.045 "supported_io_types": { 00:17:46.045 "read": true, 00:17:46.045 "write": true, 00:17:46.045 "unmap": true, 00:17:46.045 "flush": true, 00:17:46.045 "reset": true, 00:17:46.045 "nvme_admin": false, 00:17:46.045 "nvme_io": false, 00:17:46.045 "nvme_io_md": false, 00:17:46.045 "write_zeroes": true, 00:17:46.045 "zcopy": true, 00:17:46.045 
"get_zone_info": false, 00:17:46.045 "zone_management": false, 00:17:46.045 "zone_append": false, 00:17:46.045 "compare": false, 00:17:46.045 "compare_and_write": false, 00:17:46.045 "abort": true, 00:17:46.045 "seek_hole": false, 00:17:46.045 "seek_data": false, 00:17:46.045 "copy": true, 00:17:46.045 "nvme_iov_md": false 00:17:46.045 }, 00:17:46.045 "memory_domains": [ 00:17:46.045 { 00:17:46.045 "dma_device_id": "system", 00:17:46.045 "dma_device_type": 1 00:17:46.045 }, 00:17:46.045 { 00:17:46.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.045 "dma_device_type": 2 00:17:46.045 } 00:17:46.045 ], 00:17:46.045 "driver_specific": {} 00:17:46.045 } 00:17:46.045 ] 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.045 "name": "Existed_Raid", 00:17:46.045 "uuid": "d2c0b39b-bde5-4902-bccf-644359206aa4", 00:17:46.045 "strip_size_kb": 0, 00:17:46.045 "state": "configuring", 00:17:46.045 "raid_level": "raid1", 00:17:46.045 "superblock": true, 00:17:46.045 "num_base_bdevs": 2, 00:17:46.045 "num_base_bdevs_discovered": 1, 00:17:46.045 "num_base_bdevs_operational": 2, 00:17:46.045 "base_bdevs_list": [ 00:17:46.045 { 00:17:46.045 "name": "BaseBdev1", 00:17:46.045 "uuid": "023dd14a-d32f-497f-8f44-14ebec112abc", 00:17:46.045 "is_configured": true, 00:17:46.045 "data_offset": 256, 00:17:46.045 "data_size": 7936 00:17:46.045 }, 00:17:46.045 { 00:17:46.045 "name": "BaseBdev2", 00:17:46.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.045 "is_configured": false, 00:17:46.045 "data_offset": 0, 00:17:46.045 "data_size": 0 00:17:46.045 } 00:17:46.045 ] 00:17:46.045 }' 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.045 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.306 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.306 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.566 [2024-11-17 01:37:54.767729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.566 [2024-11-17 01:37:54.767844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.566 [2024-11-17 01:37:54.779770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.566 [2024-11-17 01:37:54.781423] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.566 [2024-11-17 01:37:54.781522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:46.566 01:37:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.566 "name": "Existed_Raid", 00:17:46.566 "uuid": "70c7e3e8-ca85-4257-88bf-1a44cded44ef", 00:17:46.566 "strip_size_kb": 0, 00:17:46.566 "state": "configuring", 00:17:46.566 "raid_level": "raid1", 00:17:46.566 "superblock": true, 
00:17:46.566 "num_base_bdevs": 2, 00:17:46.566 "num_base_bdevs_discovered": 1, 00:17:46.566 "num_base_bdevs_operational": 2, 00:17:46.566 "base_bdevs_list": [ 00:17:46.566 { 00:17:46.566 "name": "BaseBdev1", 00:17:46.566 "uuid": "023dd14a-d32f-497f-8f44-14ebec112abc", 00:17:46.566 "is_configured": true, 00:17:46.566 "data_offset": 256, 00:17:46.566 "data_size": 7936 00:17:46.566 }, 00:17:46.566 { 00:17:46.566 "name": "BaseBdev2", 00:17:46.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.566 "is_configured": false, 00:17:46.566 "data_offset": 0, 00:17:46.566 "data_size": 0 00:17:46.566 } 00:17:46.566 ] 00:17:46.566 }' 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.566 01:37:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.827 [2024-11-17 01:37:55.211832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.827 [2024-11-17 01:37:55.212163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:46.827 [2024-11-17 01:37:55.212217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.827 [2024-11-17 01:37:55.212490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:46.827 BaseBdev2 00:17:46.827 [2024-11-17 01:37:55.212675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:46.827 [2024-11-17 01:37:55.212691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:17:46.827 [2024-11-17 01:37:55.212842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.827 [ 00:17:46.827 { 00:17:46.827 "name": "BaseBdev2", 00:17:46.827 "aliases": [ 00:17:46.827 "903663d2-d8b5-4f4d-b6bf-3b6c1f856f4d" 00:17:46.827 ], 00:17:46.827 "product_name": "Malloc 
disk", 00:17:46.827 "block_size": 4096, 00:17:46.827 "num_blocks": 8192, 00:17:46.827 "uuid": "903663d2-d8b5-4f4d-b6bf-3b6c1f856f4d", 00:17:46.827 "assigned_rate_limits": { 00:17:46.827 "rw_ios_per_sec": 0, 00:17:46.827 "rw_mbytes_per_sec": 0, 00:17:46.827 "r_mbytes_per_sec": 0, 00:17:46.827 "w_mbytes_per_sec": 0 00:17:46.827 }, 00:17:46.827 "claimed": true, 00:17:46.827 "claim_type": "exclusive_write", 00:17:46.827 "zoned": false, 00:17:46.827 "supported_io_types": { 00:17:46.827 "read": true, 00:17:46.827 "write": true, 00:17:46.827 "unmap": true, 00:17:46.827 "flush": true, 00:17:46.827 "reset": true, 00:17:46.827 "nvme_admin": false, 00:17:46.827 "nvme_io": false, 00:17:46.827 "nvme_io_md": false, 00:17:46.827 "write_zeroes": true, 00:17:46.827 "zcopy": true, 00:17:46.827 "get_zone_info": false, 00:17:46.827 "zone_management": false, 00:17:46.827 "zone_append": false, 00:17:46.827 "compare": false, 00:17:46.827 "compare_and_write": false, 00:17:46.827 "abort": true, 00:17:46.827 "seek_hole": false, 00:17:46.827 "seek_data": false, 00:17:46.827 "copy": true, 00:17:46.827 "nvme_iov_md": false 00:17:46.827 }, 00:17:46.827 "memory_domains": [ 00:17:46.827 { 00:17:46.827 "dma_device_id": "system", 00:17:46.827 "dma_device_type": 1 00:17:46.827 }, 00:17:46.827 { 00:17:46.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.827 "dma_device_type": 2 00:17:46.827 } 00:17:46.827 ], 00:17:46.827 "driver_specific": {} 00:17:46.827 } 00:17:46.827 ] 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.827 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.087 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.087 "name": "Existed_Raid", 00:17:47.087 "uuid": "70c7e3e8-ca85-4257-88bf-1a44cded44ef", 00:17:47.087 "strip_size_kb": 0, 00:17:47.087 "state": "online", 
00:17:47.087 "raid_level": "raid1", 00:17:47.087 "superblock": true, 00:17:47.087 "num_base_bdevs": 2, 00:17:47.087 "num_base_bdevs_discovered": 2, 00:17:47.087 "num_base_bdevs_operational": 2, 00:17:47.087 "base_bdevs_list": [ 00:17:47.087 { 00:17:47.087 "name": "BaseBdev1", 00:17:47.087 "uuid": "023dd14a-d32f-497f-8f44-14ebec112abc", 00:17:47.087 "is_configured": true, 00:17:47.087 "data_offset": 256, 00:17:47.087 "data_size": 7936 00:17:47.087 }, 00:17:47.087 { 00:17:47.087 "name": "BaseBdev2", 00:17:47.087 "uuid": "903663d2-d8b5-4f4d-b6bf-3b6c1f856f4d", 00:17:47.087 "is_configured": true, 00:17:47.087 "data_offset": 256, 00:17:47.087 "data_size": 7936 00:17:47.087 } 00:17:47.087 ] 00:17:47.087 }' 00:17:47.087 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.087 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.347 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.348 [2024-11-17 01:37:55.683297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.348 "name": "Existed_Raid", 00:17:47.348 "aliases": [ 00:17:47.348 "70c7e3e8-ca85-4257-88bf-1a44cded44ef" 00:17:47.348 ], 00:17:47.348 "product_name": "Raid Volume", 00:17:47.348 "block_size": 4096, 00:17:47.348 "num_blocks": 7936, 00:17:47.348 "uuid": "70c7e3e8-ca85-4257-88bf-1a44cded44ef", 00:17:47.348 "assigned_rate_limits": { 00:17:47.348 "rw_ios_per_sec": 0, 00:17:47.348 "rw_mbytes_per_sec": 0, 00:17:47.348 "r_mbytes_per_sec": 0, 00:17:47.348 "w_mbytes_per_sec": 0 00:17:47.348 }, 00:17:47.348 "claimed": false, 00:17:47.348 "zoned": false, 00:17:47.348 "supported_io_types": { 00:17:47.348 "read": true, 00:17:47.348 "write": true, 00:17:47.348 "unmap": false, 00:17:47.348 "flush": false, 00:17:47.348 "reset": true, 00:17:47.348 "nvme_admin": false, 00:17:47.348 "nvme_io": false, 00:17:47.348 "nvme_io_md": false, 00:17:47.348 "write_zeroes": true, 00:17:47.348 "zcopy": false, 00:17:47.348 "get_zone_info": false, 00:17:47.348 "zone_management": false, 00:17:47.348 "zone_append": false, 00:17:47.348 "compare": false, 00:17:47.348 "compare_and_write": false, 00:17:47.348 "abort": false, 00:17:47.348 "seek_hole": false, 00:17:47.348 "seek_data": false, 00:17:47.348 "copy": false, 00:17:47.348 "nvme_iov_md": false 00:17:47.348 }, 00:17:47.348 "memory_domains": [ 00:17:47.348 { 00:17:47.348 "dma_device_id": "system", 00:17:47.348 "dma_device_type": 1 00:17:47.348 }, 00:17:47.348 { 00:17:47.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.348 "dma_device_type": 2 00:17:47.348 }, 00:17:47.348 { 00:17:47.348 
"dma_device_id": "system", 00:17:47.348 "dma_device_type": 1 00:17:47.348 }, 00:17:47.348 { 00:17:47.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.348 "dma_device_type": 2 00:17:47.348 } 00:17:47.348 ], 00:17:47.348 "driver_specific": { 00:17:47.348 "raid": { 00:17:47.348 "uuid": "70c7e3e8-ca85-4257-88bf-1a44cded44ef", 00:17:47.348 "strip_size_kb": 0, 00:17:47.348 "state": "online", 00:17:47.348 "raid_level": "raid1", 00:17:47.348 "superblock": true, 00:17:47.348 "num_base_bdevs": 2, 00:17:47.348 "num_base_bdevs_discovered": 2, 00:17:47.348 "num_base_bdevs_operational": 2, 00:17:47.348 "base_bdevs_list": [ 00:17:47.348 { 00:17:47.348 "name": "BaseBdev1", 00:17:47.348 "uuid": "023dd14a-d32f-497f-8f44-14ebec112abc", 00:17:47.348 "is_configured": true, 00:17:47.348 "data_offset": 256, 00:17:47.348 "data_size": 7936 00:17:47.348 }, 00:17:47.348 { 00:17:47.348 "name": "BaseBdev2", 00:17:47.348 "uuid": "903663d2-d8b5-4f4d-b6bf-3b6c1f856f4d", 00:17:47.348 "is_configured": true, 00:17:47.348 "data_offset": 256, 00:17:47.348 "data_size": 7936 00:17:47.348 } 00:17:47.348 ] 00:17:47.348 } 00:17:47.348 } 00:17:47.348 }' 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:47.348 BaseBdev2' 00:17:47.348 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:47.608 01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.608 
01:37:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.608 [2024-11-17 01:37:55.922676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.608 01:37:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.608 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.868 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.868 "name": "Existed_Raid", 00:17:47.868 "uuid": "70c7e3e8-ca85-4257-88bf-1a44cded44ef", 00:17:47.868 "strip_size_kb": 0, 00:17:47.868 "state": "online", 00:17:47.868 "raid_level": "raid1", 00:17:47.868 "superblock": true, 00:17:47.868 "num_base_bdevs": 2, 00:17:47.868 "num_base_bdevs_discovered": 1, 00:17:47.868 "num_base_bdevs_operational": 1, 00:17:47.868 "base_bdevs_list": [ 00:17:47.868 { 00:17:47.868 "name": null, 00:17:47.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.868 "is_configured": false, 00:17:47.868 "data_offset": 0, 00:17:47.868 "data_size": 7936 00:17:47.868 }, 00:17:47.868 { 00:17:47.868 "name": "BaseBdev2", 00:17:47.868 "uuid": "903663d2-d8b5-4f4d-b6bf-3b6c1f856f4d", 00:17:47.868 "is_configured": true, 00:17:47.868 "data_offset": 256, 00:17:47.868 "data_size": 7936 00:17:47.868 } 00:17:47.868 ] 00:17:47.868 }' 00:17:47.868 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.868 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:48.128 01:37:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.128 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.128 [2024-11-17 01:37:56.508891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:48.128 [2024-11-17 01:37:56.509057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.388 [2024-11-17 01:37:56.597728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.388 [2024-11-17 01:37:56.597871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.388 [2024-11-17 01:37:56.597921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:48.388 01:37:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85665 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85665 ']' 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85665 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85665 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85665' 00:17:48.388 killing process with pid 85665 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85665 00:17:48.388 [2024-11-17 01:37:56.700523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.388 01:37:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85665 00:17:48.388 [2024-11-17 01:37:56.715723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.328 ************************************ 00:17:49.328 END TEST raid_state_function_test_sb_4k 00:17:49.328 01:37:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:49.328 00:17:49.328 real 0m4.896s 00:17:49.328 user 0m7.044s 00:17:49.328 sys 0m0.898s 00:17:49.328 01:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.328 01:37:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.328 ************************************ 00:17:49.588 01:37:57 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:49.588 01:37:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:49.588 01:37:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.588 01:37:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.588 ************************************ 00:17:49.588 START TEST raid_superblock_test_4k 00:17:49.588 ************************************ 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85913 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 85913 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85913 ']' 00:17:49.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.588 01:37:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.588 [2024-11-17 01:37:57.929851] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:49.588 [2024-11-17 01:37:57.929971] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85913 ] 00:17:49.848 [2024-11-17 01:37:58.102327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.848 [2024-11-17 01:37:58.207098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.107 [2024-11-17 01:37:58.398478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.107 [2024-11-17 01:37:58.398567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.367 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.367 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:50.367 01:37:58 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:50.367 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.367 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:50.367 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:50.367 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.368 malloc1 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.368 [2024-11-17 01:37:58.796152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.368 [2024-11-17 01:37:58.796330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.368 
[2024-11-17 01:37:58.796372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:50.368 [2024-11-17 01:37:58.796403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.368 [2024-11-17 01:37:58.798373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.368 [2024-11-17 01:37:58.798444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.368 pt1 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.368 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.628 malloc2 00:17:50.628 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.629 [2024-11-17 01:37:58.851050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.629 [2024-11-17 01:37:58.851158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.629 [2024-11-17 01:37:58.851200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:50.629 [2024-11-17 01:37:58.851226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.629 [2024-11-17 01:37:58.853191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.629 [2024-11-17 01:37:58.853259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.629 pt2 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.629 [2024-11-17 01:37:58.863088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.629 [2024-11-17 01:37:58.864847] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.629 [2024-11-17 01:37:58.865053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:50.629 [2024-11-17 01:37:58.865102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.629 [2024-11-17 01:37:58.865330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:50.629 [2024-11-17 01:37:58.865510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:50.629 [2024-11-17 01:37:58.865557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:50.629 [2024-11-17 01:37:58.865739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.629 "name": "raid_bdev1", 00:17:50.629 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:50.629 "strip_size_kb": 0, 00:17:50.629 "state": "online", 00:17:50.629 "raid_level": "raid1", 00:17:50.629 "superblock": true, 00:17:50.629 "num_base_bdevs": 2, 00:17:50.629 "num_base_bdevs_discovered": 2, 00:17:50.629 "num_base_bdevs_operational": 2, 00:17:50.629 "base_bdevs_list": [ 00:17:50.629 { 00:17:50.629 "name": "pt1", 00:17:50.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.629 "is_configured": true, 00:17:50.629 "data_offset": 256, 00:17:50.629 "data_size": 7936 00:17:50.629 }, 00:17:50.629 { 00:17:50.629 "name": "pt2", 00:17:50.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.629 "is_configured": true, 00:17:50.629 "data_offset": 256, 00:17:50.629 "data_size": 7936 00:17:50.629 } 00:17:50.629 ] 00:17:50.629 }' 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.629 01:37:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.897 01:37:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.897 [2024-11-17 01:37:59.306522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.897 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.897 "name": "raid_bdev1", 00:17:50.897 "aliases": [ 00:17:50.897 "c42ecf80-873f-43b5-b56e-c90a026d7f12" 00:17:50.897 ], 00:17:50.897 "product_name": "Raid Volume", 00:17:50.897 "block_size": 4096, 00:17:50.897 "num_blocks": 7936, 00:17:50.897 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:50.897 "assigned_rate_limits": { 00:17:50.897 "rw_ios_per_sec": 0, 00:17:50.897 "rw_mbytes_per_sec": 0, 00:17:50.897 "r_mbytes_per_sec": 0, 00:17:50.897 "w_mbytes_per_sec": 0 00:17:50.897 }, 00:17:50.897 "claimed": false, 00:17:50.897 "zoned": false, 00:17:50.897 "supported_io_types": { 00:17:50.897 "read": true, 00:17:50.897 "write": true, 00:17:50.897 "unmap": false, 00:17:50.897 "flush": false, 
00:17:50.897 "reset": true, 00:17:50.897 "nvme_admin": false, 00:17:50.897 "nvme_io": false, 00:17:50.897 "nvme_io_md": false, 00:17:50.897 "write_zeroes": true, 00:17:50.897 "zcopy": false, 00:17:50.897 "get_zone_info": false, 00:17:50.897 "zone_management": false, 00:17:50.897 "zone_append": false, 00:17:50.897 "compare": false, 00:17:50.897 "compare_and_write": false, 00:17:50.897 "abort": false, 00:17:50.897 "seek_hole": false, 00:17:50.897 "seek_data": false, 00:17:50.897 "copy": false, 00:17:50.897 "nvme_iov_md": false 00:17:50.897 }, 00:17:50.897 "memory_domains": [ 00:17:50.897 { 00:17:50.897 "dma_device_id": "system", 00:17:50.897 "dma_device_type": 1 00:17:50.897 }, 00:17:50.897 { 00:17:50.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.897 "dma_device_type": 2 00:17:50.897 }, 00:17:50.897 { 00:17:50.897 "dma_device_id": "system", 00:17:50.897 "dma_device_type": 1 00:17:50.897 }, 00:17:50.897 { 00:17:50.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.897 "dma_device_type": 2 00:17:50.897 } 00:17:50.897 ], 00:17:50.897 "driver_specific": { 00:17:50.897 "raid": { 00:17:50.897 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:50.897 "strip_size_kb": 0, 00:17:50.897 "state": "online", 00:17:50.897 "raid_level": "raid1", 00:17:50.897 "superblock": true, 00:17:50.897 "num_base_bdevs": 2, 00:17:50.897 "num_base_bdevs_discovered": 2, 00:17:50.897 "num_base_bdevs_operational": 2, 00:17:50.897 "base_bdevs_list": [ 00:17:50.897 { 00:17:50.897 "name": "pt1", 00:17:50.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.897 "is_configured": true, 00:17:50.897 "data_offset": 256, 00:17:50.897 "data_size": 7936 00:17:50.897 }, 00:17:50.897 { 00:17:50.897 "name": "pt2", 00:17:50.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.897 "is_configured": true, 00:17:50.897 "data_offset": 256, 00:17:50.897 "data_size": 7936 00:17:50.897 } 00:17:50.897 ] 00:17:50.897 } 00:17:50.897 } 00:17:50.897 }' 00:17:50.897 01:37:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:51.161 pt2' 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.161 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:51.162 [2024-11-17 01:37:59.514155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c42ecf80-873f-43b5-b56e-c90a026d7f12 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z c42ecf80-873f-43b5-b56e-c90a026d7f12 ']' 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.162 [2024-11-17 01:37:59.561834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.162 [2024-11-17 01:37:59.561901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.162 [2024-11-17 01:37:59.561992] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.162 [2024-11-17 01:37:59.562067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.162 [2024-11-17 01:37:59.562101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.162 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.422 [2024-11-17 01:37:59.681685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:51.422 [2024-11-17 01:37:59.683464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:51.422 [2024-11-17 01:37:59.683577] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:51.422 [2024-11-17 01:37:59.683655] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:51.422 [2024-11-17 01:37:59.683710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.422 [2024-11-17 01:37:59.683740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:51.422 request: 00:17:51.422 { 00:17:51.422 "name": "raid_bdev1", 00:17:51.422 "raid_level": "raid1", 00:17:51.422 "base_bdevs": [ 00:17:51.422 "malloc1", 00:17:51.422 "malloc2" 00:17:51.422 ], 00:17:51.422 "superblock": false, 00:17:51.422 "method": "bdev_raid_create", 00:17:51.422 "req_id": 1 00:17:51.422 } 00:17:51.422 Got JSON-RPC error response 00:17:51.422 response: 00:17:51.422 { 00:17:51.422 "code": -17, 00:17:51.422 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:51.422 } 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.422 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.422 [2024-11-17 01:37:59.733581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.422 [2024-11-17 01:37:59.733673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.423 [2024-11-17 01:37:59.733702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:51.423 [2024-11-17 01:37:59.733729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.423 [2024-11-17 01:37:59.735713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.423 [2024-11-17 01:37:59.735807] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.423 [2024-11-17 01:37:59.735893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:51.423 [2024-11-17 01:37:59.735980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.423 pt1 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.423 "name": "raid_bdev1", 00:17:51.423 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:51.423 "strip_size_kb": 0, 00:17:51.423 "state": "configuring", 00:17:51.423 "raid_level": "raid1", 00:17:51.423 "superblock": true, 00:17:51.423 "num_base_bdevs": 2, 00:17:51.423 "num_base_bdevs_discovered": 1, 00:17:51.423 "num_base_bdevs_operational": 2, 00:17:51.423 "base_bdevs_list": [ 00:17:51.423 { 00:17:51.423 "name": "pt1", 00:17:51.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.423 "is_configured": true, 00:17:51.423 "data_offset": 256, 00:17:51.423 "data_size": 7936 00:17:51.423 }, 00:17:51.423 { 00:17:51.423 "name": null, 00:17:51.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.423 "is_configured": false, 00:17:51.423 "data_offset": 256, 00:17:51.423 "data_size": 7936 00:17:51.423 } 00:17:51.423 ] 00:17:51.423 }' 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.423 01:37:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.993 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:51.993 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:51.993 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:51.993 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.993 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.993 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:51.993 [2024-11-17 01:38:00.196786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.993 [2024-11-17 01:38:00.196896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.993 [2024-11-17 01:38:00.196932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:51.993 [2024-11-17 01:38:00.196961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.994 [2024-11-17 01:38:00.197320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.994 [2024-11-17 01:38:00.197378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.994 [2024-11-17 01:38:00.197456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:51.994 [2024-11-17 01:38:00.197503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.994 [2024-11-17 01:38:00.197622] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:51.994 [2024-11-17 01:38:00.197662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:51.994 [2024-11-17 01:38:00.197901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:51.994 [2024-11-17 01:38:00.198071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:51.994 [2024-11-17 01:38:00.198111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:51.994 [2024-11-17 01:38:00.198255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.994 pt2 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:51.994 01:38:00 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.994 "name": "raid_bdev1", 00:17:51.994 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:51.994 
"strip_size_kb": 0, 00:17:51.994 "state": "online", 00:17:51.994 "raid_level": "raid1", 00:17:51.994 "superblock": true, 00:17:51.994 "num_base_bdevs": 2, 00:17:51.994 "num_base_bdevs_discovered": 2, 00:17:51.994 "num_base_bdevs_operational": 2, 00:17:51.994 "base_bdevs_list": [ 00:17:51.994 { 00:17:51.994 "name": "pt1", 00:17:51.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.994 "is_configured": true, 00:17:51.994 "data_offset": 256, 00:17:51.994 "data_size": 7936 00:17:51.994 }, 00:17:51.994 { 00:17:51.994 "name": "pt2", 00:17:51.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.994 "is_configured": true, 00:17:51.994 "data_offset": 256, 00:17:51.994 "data_size": 7936 00:17:51.994 } 00:17:51.994 ] 00:17:51.994 }' 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.994 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.254 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:52.254 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:52.254 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:52.254 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:52.254 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:52.254 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:52.255 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:52.255 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.255 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.255 01:38:00 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.255 [2024-11-17 01:38:00.616255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.255 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.255 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.255 "name": "raid_bdev1", 00:17:52.255 "aliases": [ 00:17:52.255 "c42ecf80-873f-43b5-b56e-c90a026d7f12" 00:17:52.255 ], 00:17:52.255 "product_name": "Raid Volume", 00:17:52.255 "block_size": 4096, 00:17:52.255 "num_blocks": 7936, 00:17:52.255 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:52.255 "assigned_rate_limits": { 00:17:52.255 "rw_ios_per_sec": 0, 00:17:52.255 "rw_mbytes_per_sec": 0, 00:17:52.255 "r_mbytes_per_sec": 0, 00:17:52.255 "w_mbytes_per_sec": 0 00:17:52.255 }, 00:17:52.255 "claimed": false, 00:17:52.255 "zoned": false, 00:17:52.255 "supported_io_types": { 00:17:52.255 "read": true, 00:17:52.255 "write": true, 00:17:52.255 "unmap": false, 00:17:52.255 "flush": false, 00:17:52.255 "reset": true, 00:17:52.255 "nvme_admin": false, 00:17:52.255 "nvme_io": false, 00:17:52.255 "nvme_io_md": false, 00:17:52.255 "write_zeroes": true, 00:17:52.255 "zcopy": false, 00:17:52.255 "get_zone_info": false, 00:17:52.255 "zone_management": false, 00:17:52.255 "zone_append": false, 00:17:52.255 "compare": false, 00:17:52.255 "compare_and_write": false, 00:17:52.255 "abort": false, 00:17:52.255 "seek_hole": false, 00:17:52.255 "seek_data": false, 00:17:52.255 "copy": false, 00:17:52.255 "nvme_iov_md": false 00:17:52.255 }, 00:17:52.255 "memory_domains": [ 00:17:52.255 { 00:17:52.255 "dma_device_id": "system", 00:17:52.255 "dma_device_type": 1 00:17:52.255 }, 00:17:52.255 { 00:17:52.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.255 "dma_device_type": 2 00:17:52.255 }, 00:17:52.255 { 00:17:52.255 "dma_device_id": "system", 00:17:52.255 
"dma_device_type": 1 00:17:52.255 }, 00:17:52.255 { 00:17:52.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.255 "dma_device_type": 2 00:17:52.255 } 00:17:52.255 ], 00:17:52.255 "driver_specific": { 00:17:52.255 "raid": { 00:17:52.255 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:52.255 "strip_size_kb": 0, 00:17:52.255 "state": "online", 00:17:52.255 "raid_level": "raid1", 00:17:52.255 "superblock": true, 00:17:52.255 "num_base_bdevs": 2, 00:17:52.255 "num_base_bdevs_discovered": 2, 00:17:52.255 "num_base_bdevs_operational": 2, 00:17:52.255 "base_bdevs_list": [ 00:17:52.255 { 00:17:52.255 "name": "pt1", 00:17:52.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.255 "is_configured": true, 00:17:52.255 "data_offset": 256, 00:17:52.255 "data_size": 7936 00:17:52.255 }, 00:17:52.255 { 00:17:52.255 "name": "pt2", 00:17:52.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.255 "is_configured": true, 00:17:52.255 "data_offset": 256, 00:17:52.255 "data_size": 7936 00:17:52.255 } 00:17:52.255 ] 00:17:52.255 } 00:17:52.255 } 00:17:52.255 }' 00:17:52.255 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.255 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:52.255 pt2' 00:17:52.255 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:52.516 [2024-11-17 01:38:00.843884] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' c42ecf80-873f-43b5-b56e-c90a026d7f12 '!=' c42ecf80-873f-43b5-b56e-c90a026d7f12 ']' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.516 [2024-11-17 01:38:00.895604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.516 "name": "raid_bdev1", 00:17:52.516 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:52.516 "strip_size_kb": 0, 00:17:52.516 "state": "online", 00:17:52.516 "raid_level": "raid1", 00:17:52.516 "superblock": true, 00:17:52.516 "num_base_bdevs": 2, 00:17:52.516 "num_base_bdevs_discovered": 1, 00:17:52.516 "num_base_bdevs_operational": 1, 00:17:52.516 "base_bdevs_list": [ 00:17:52.516 { 00:17:52.516 "name": null, 00:17:52.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.516 "is_configured": false, 00:17:52.516 "data_offset": 0, 00:17:52.516 "data_size": 7936 00:17:52.516 }, 00:17:52.516 { 00:17:52.516 "name": "pt2", 00:17:52.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.516 "is_configured": true, 00:17:52.516 "data_offset": 256, 00:17:52.516 "data_size": 7936 00:17:52.516 } 00:17:52.516 ] 00:17:52.516 }' 00:17:52.516 01:38:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.516 01:38:00 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 [2024-11-17 01:38:01.354867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.087 [2024-11-17 01:38:01.354938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.087 [2024-11-17 01:38:01.355002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.087 [2024-11-17 01:38:01.355049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.087 [2024-11-17 01:38:01.355097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 [2024-11-17 01:38:01.430761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.087 [2024-11-17 01:38:01.430884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.087 [2024-11-17 01:38:01.430918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:53.087 [2024-11-17 01:38:01.430950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.087 [2024-11-17 01:38:01.432926] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.087 [2024-11-17 01:38:01.433015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.087 [2024-11-17 01:38:01.433095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.087 [2024-11-17 01:38:01.433153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.087 [2024-11-17 01:38:01.433281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:53.087 [2024-11-17 01:38:01.433297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.087 [2024-11-17 01:38:01.433512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:53.087 [2024-11-17 01:38:01.433655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:53.087 [2024-11-17 01:38:01.433664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:53.087 [2024-11-17 01:38:01.433802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.087 pt2 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:53.087 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.088 "name": "raid_bdev1", 00:17:53.088 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:53.088 "strip_size_kb": 0, 00:17:53.088 "state": "online", 00:17:53.088 "raid_level": "raid1", 00:17:53.088 "superblock": true, 00:17:53.088 "num_base_bdevs": 2, 00:17:53.088 "num_base_bdevs_discovered": 1, 00:17:53.088 "num_base_bdevs_operational": 1, 00:17:53.088 "base_bdevs_list": [ 00:17:53.088 { 00:17:53.088 "name": null, 00:17:53.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.088 "is_configured": false, 00:17:53.088 "data_offset": 256, 00:17:53.088 "data_size": 7936 00:17:53.088 }, 00:17:53.088 { 00:17:53.088 "name": "pt2", 00:17:53.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.088 "is_configured": true, 00:17:53.088 "data_offset": 256, 00:17:53.088 "data_size": 7936 00:17:53.088 } 00:17:53.088 ] 00:17:53.088 }' 
00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.088 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.658 [2024-11-17 01:38:01.869972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.658 [2024-11-17 01:38:01.870045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.658 [2024-11-17 01:38:01.870124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.658 [2024-11-17 01:38:01.870176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.658 [2024-11-17 01:38:01.870207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.658 [2024-11-17 01:38:01.917910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.658 [2024-11-17 01:38:01.918013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.658 [2024-11-17 01:38:01.918044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:53.658 [2024-11-17 01:38:01.918070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.658 [2024-11-17 01:38:01.920132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.658 [2024-11-17 01:38:01.920198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.658 [2024-11-17 01:38:01.920301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:53.658 [2024-11-17 01:38:01.920369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.658 [2024-11-17 01:38:01.920517] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:53.658 [2024-11-17 01:38:01.920568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.658 [2024-11-17 01:38:01.920606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:53.658 [2024-11-17 01:38:01.920718] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.658 [2024-11-17 01:38:01.920831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:53.658 [2024-11-17 01:38:01.920870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.658 [2024-11-17 01:38:01.921099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:53.658 [2024-11-17 01:38:01.921268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:53.658 [2024-11-17 01:38:01.921311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:53.658 [2024-11-17 01:38:01.921481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.658 pt1 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.658 "name": "raid_bdev1", 00:17:53.658 "uuid": "c42ecf80-873f-43b5-b56e-c90a026d7f12", 00:17:53.658 "strip_size_kb": 0, 00:17:53.658 "state": "online", 00:17:53.658 "raid_level": "raid1", 00:17:53.658 "superblock": true, 00:17:53.658 "num_base_bdevs": 2, 00:17:53.658 "num_base_bdevs_discovered": 1, 00:17:53.658 "num_base_bdevs_operational": 1, 00:17:53.658 "base_bdevs_list": [ 00:17:53.658 { 00:17:53.658 "name": null, 00:17:53.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.658 "is_configured": false, 00:17:53.658 "data_offset": 256, 00:17:53.658 "data_size": 7936 00:17:53.658 }, 00:17:53.658 { 00:17:53.658 "name": "pt2", 00:17:53.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.658 "is_configured": true, 00:17:53.658 "data_offset": 256, 00:17:53.658 "data_size": 7936 00:17:53.658 } 00:17:53.658 ] 00:17:53.658 }' 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.658 01:38:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.918 01:38:02 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:53.918 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.918 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.918 01:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.179 [2024-11-17 01:38:02.425222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' c42ecf80-873f-43b5-b56e-c90a026d7f12 '!=' c42ecf80-873f-43b5-b56e-c90a026d7f12 ']' 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85913 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85913 ']' 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85913 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85913 00:17:54.179 killing process with pid 85913 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85913' 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85913 00:17:54.179 [2024-11-17 01:38:02.506274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.179 [2024-11-17 01:38:02.506333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.179 [2024-11-17 01:38:02.506363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.179 [2024-11-17 01:38:02.506374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:54.179 01:38:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85913 00:17:54.439 [2024-11-17 01:38:02.700441] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.380 01:38:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:55.380 00:17:55.380 real 0m5.893s 00:17:55.380 user 0m8.891s 00:17:55.380 sys 0m1.138s 00:17:55.380 01:38:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.380 ************************************ 00:17:55.380 END TEST raid_superblock_test_4k 00:17:55.380 ************************************ 00:17:55.380 01:38:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.380 01:38:03 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:55.380 01:38:03 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:55.380 01:38:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:55.380 01:38:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.380 01:38:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.380 ************************************ 00:17:55.380 START TEST raid_rebuild_test_sb_4k 00:17:55.380 ************************************ 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86236 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86236 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86236 ']' 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.380 01:38:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.640 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:55.640 Zero copy mechanism will not be used. 00:17:55.640 [2024-11-17 01:38:03.920350] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:55.640 [2024-11-17 01:38:03.920479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86236 ] 00:17:55.640 [2024-11-17 01:38:04.093644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.899 [2024-11-17 01:38:04.202013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.159 [2024-11-17 01:38:04.404317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.159 [2024-11-17 01:38:04.404373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:56.419 
01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.419 BaseBdev1_malloc 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.419 [2024-11-17 01:38:04.764851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:56.419 [2024-11-17 01:38:04.765015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.419 [2024-11-17 01:38:04.765055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.419 [2024-11-17 01:38:04.765085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.419 [2024-11-17 01:38:04.767072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.419 [2024-11-17 01:38:04.767144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.419 BaseBdev1 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.419 BaseBdev2_malloc 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.419 [2024-11-17 01:38:04.818159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:56.419 [2024-11-17 01:38:04.818269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.419 [2024-11-17 01:38:04.818290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.419 [2024-11-17 01:38:04.818303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.419 [2024-11-17 01:38:04.820230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.419 [2024-11-17 01:38:04.820268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:56.419 BaseBdev2 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:56.419 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.420 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.689 spare_malloc 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.689 spare_delay 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.689 [2024-11-17 01:38:04.918014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:56.689 [2024-11-17 01:38:04.918070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.689 [2024-11-17 01:38:04.918088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:56.689 [2024-11-17 01:38:04.918098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.689 [2024-11-17 01:38:04.920033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.689 [2024-11-17 01:38:04.920072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:56.689 spare 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.689 
[2024-11-17 01:38:04.930048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.689 [2024-11-17 01:38:04.931827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.689 [2024-11-17 01:38:04.932049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.689 [2024-11-17 01:38:04.932099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.689 [2024-11-17 01:38:04.932339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:56.689 [2024-11-17 01:38:04.932538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.689 [2024-11-17 01:38:04.932577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.689 [2024-11-17 01:38:04.932751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.689 "name": "raid_bdev1", 00:17:56.689 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:17:56.689 "strip_size_kb": 0, 00:17:56.689 "state": "online", 00:17:56.689 "raid_level": "raid1", 00:17:56.689 "superblock": true, 00:17:56.689 "num_base_bdevs": 2, 00:17:56.689 "num_base_bdevs_discovered": 2, 00:17:56.689 "num_base_bdevs_operational": 2, 00:17:56.689 "base_bdevs_list": [ 00:17:56.689 { 00:17:56.689 "name": "BaseBdev1", 00:17:56.689 "uuid": "78ef25ad-8537-5158-a75a-0f0769cd4013", 00:17:56.689 "is_configured": true, 00:17:56.689 "data_offset": 256, 00:17:56.689 "data_size": 7936 00:17:56.689 }, 00:17:56.689 { 00:17:56.689 "name": "BaseBdev2", 00:17:56.689 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:17:56.689 "is_configured": true, 00:17:56.689 "data_offset": 256, 00:17:56.689 "data_size": 7936 00:17:56.689 } 00:17:56.689 ] 00:17:56.689 }' 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.689 01:38:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.960 [2024-11-17 01:38:05.353667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:56.960 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:57.219 [2024-11-17 01:38:05.601061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:57.219 /dev/nbd0 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.219 1+0 records in 00:17:57.219 1+0 records out 00:17:57.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416342 s, 9.8 MB/s 00:17:57.219 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:57.478 01:38:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:58.047 7936+0 records in 00:17:58.047 7936+0 records out 00:17:58.047 32505856 bytes (33 MB, 31 MiB) copied, 0.603154 s, 53.9 MB/s 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:58.047 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:58.048 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:58.048 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.048 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.048 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:58.308 [2024-11-17 01:38:06.506856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.308 [2024-11-17 01:38:06.518923] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.308 "name": 
"raid_bdev1", 00:17:58.308 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:17:58.308 "strip_size_kb": 0, 00:17:58.308 "state": "online", 00:17:58.308 "raid_level": "raid1", 00:17:58.308 "superblock": true, 00:17:58.308 "num_base_bdevs": 2, 00:17:58.308 "num_base_bdevs_discovered": 1, 00:17:58.308 "num_base_bdevs_operational": 1, 00:17:58.308 "base_bdevs_list": [ 00:17:58.308 { 00:17:58.308 "name": null, 00:17:58.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.308 "is_configured": false, 00:17:58.308 "data_offset": 0, 00:17:58.308 "data_size": 7936 00:17:58.308 }, 00:17:58.308 { 00:17:58.308 "name": "BaseBdev2", 00:17:58.308 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:17:58.308 "is_configured": true, 00:17:58.308 "data_offset": 256, 00:17:58.308 "data_size": 7936 00:17:58.308 } 00:17:58.308 ] 00:17:58.308 }' 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.308 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.568 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.568 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.568 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.568 [2024-11-17 01:38:06.938191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.568 [2024-11-17 01:38:06.953830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:58.568 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.568 01:38:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:58.568 [2024-11-17 01:38:06.955591] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.507 01:38:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.507 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.507 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.507 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.507 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.767 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.767 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.767 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.767 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.767 01:38:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.767 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.767 "name": "raid_bdev1", 00:17:59.767 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:17:59.767 "strip_size_kb": 0, 00:17:59.767 "state": "online", 00:17:59.767 "raid_level": "raid1", 00:17:59.767 "superblock": true, 00:17:59.767 "num_base_bdevs": 2, 00:17:59.767 "num_base_bdevs_discovered": 2, 00:17:59.767 "num_base_bdevs_operational": 2, 00:17:59.767 "process": { 00:17:59.767 "type": "rebuild", 00:17:59.767 "target": "spare", 00:17:59.767 "progress": { 00:17:59.767 "blocks": 2560, 00:17:59.767 "percent": 32 00:17:59.767 } 00:17:59.767 }, 00:17:59.767 "base_bdevs_list": [ 00:17:59.767 { 00:17:59.767 "name": "spare", 00:17:59.767 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:17:59.767 "is_configured": true, 00:17:59.767 "data_offset": 256, 
00:17:59.767 "data_size": 7936 00:17:59.767 }, 00:17:59.767 { 00:17:59.767 "name": "BaseBdev2", 00:17:59.767 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:17:59.768 "is_configured": true, 00:17:59.768 "data_offset": 256, 00:17:59.768 "data_size": 7936 00:17:59.768 } 00:17:59.768 ] 00:17:59.768 }' 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.768 [2024-11-17 01:38:08.127196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.768 [2024-11-17 01:38:08.160128] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:59.768 [2024-11-17 01:38:08.160235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.768 [2024-11-17 01:38:08.160270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.768 [2024-11-17 01:38:08.160292] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.768 
01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.768 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.027 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.027 "name": "raid_bdev1", 00:18:00.027 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:00.027 "strip_size_kb": 0, 00:18:00.027 "state": "online", 00:18:00.027 "raid_level": "raid1", 00:18:00.027 "superblock": true, 00:18:00.027 "num_base_bdevs": 2, 00:18:00.027 "num_base_bdevs_discovered": 1, 00:18:00.027 
"num_base_bdevs_operational": 1, 00:18:00.027 "base_bdevs_list": [ 00:18:00.027 { 00:18:00.027 "name": null, 00:18:00.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.027 "is_configured": false, 00:18:00.027 "data_offset": 0, 00:18:00.027 "data_size": 7936 00:18:00.027 }, 00:18:00.027 { 00:18:00.027 "name": "BaseBdev2", 00:18:00.027 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:00.027 "is_configured": true, 00:18:00.027 "data_offset": 256, 00:18:00.027 "data_size": 7936 00:18:00.027 } 00:18:00.027 ] 00:18:00.027 }' 00:18:00.027 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.027 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.287 
"name": "raid_bdev1", 00:18:00.287 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:00.287 "strip_size_kb": 0, 00:18:00.287 "state": "online", 00:18:00.287 "raid_level": "raid1", 00:18:00.287 "superblock": true, 00:18:00.287 "num_base_bdevs": 2, 00:18:00.287 "num_base_bdevs_discovered": 1, 00:18:00.287 "num_base_bdevs_operational": 1, 00:18:00.287 "base_bdevs_list": [ 00:18:00.287 { 00:18:00.287 "name": null, 00:18:00.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.287 "is_configured": false, 00:18:00.287 "data_offset": 0, 00:18:00.287 "data_size": 7936 00:18:00.287 }, 00:18:00.287 { 00:18:00.287 "name": "BaseBdev2", 00:18:00.287 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:00.287 "is_configured": true, 00:18:00.287 "data_offset": 256, 00:18:00.287 "data_size": 7936 00:18:00.287 } 00:18:00.287 ] 00:18:00.287 }' 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.287 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.287 [2024-11-17 01:38:08.739899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.546 [2024-11-17 01:38:08.754925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:00.546 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:00.546 01:38:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:00.546 [2024-11-17 01:38:08.756619] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.488 "name": "raid_bdev1", 00:18:01.488 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:01.488 "strip_size_kb": 0, 00:18:01.488 "state": "online", 00:18:01.488 "raid_level": "raid1", 00:18:01.488 "superblock": true, 00:18:01.488 "num_base_bdevs": 2, 00:18:01.488 "num_base_bdevs_discovered": 2, 00:18:01.488 "num_base_bdevs_operational": 2, 00:18:01.488 "process": { 00:18:01.488 "type": "rebuild", 00:18:01.488 "target": "spare", 00:18:01.488 "progress": { 00:18:01.488 "blocks": 2560, 00:18:01.488 
"percent": 32 00:18:01.488 } 00:18:01.488 }, 00:18:01.488 "base_bdevs_list": [ 00:18:01.488 { 00:18:01.488 "name": "spare", 00:18:01.488 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:01.488 "is_configured": true, 00:18:01.488 "data_offset": 256, 00:18:01.488 "data_size": 7936 00:18:01.488 }, 00:18:01.488 { 00:18:01.488 "name": "BaseBdev2", 00:18:01.488 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:01.488 "is_configured": true, 00:18:01.488 "data_offset": 256, 00:18:01.488 "data_size": 7936 00:18:01.488 } 00:18:01.488 ] 00:18:01.488 }' 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:01.488 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=663 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.488 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.748 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.748 "name": "raid_bdev1", 00:18:01.748 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:01.748 "strip_size_kb": 0, 00:18:01.748 "state": "online", 00:18:01.748 "raid_level": "raid1", 00:18:01.748 "superblock": true, 00:18:01.748 "num_base_bdevs": 2, 00:18:01.748 "num_base_bdevs_discovered": 2, 00:18:01.748 "num_base_bdevs_operational": 2, 00:18:01.748 "process": { 00:18:01.748 "type": "rebuild", 00:18:01.748 "target": "spare", 00:18:01.748 "progress": { 00:18:01.748 "blocks": 2816, 00:18:01.748 "percent": 35 00:18:01.748 } 00:18:01.748 }, 00:18:01.748 "base_bdevs_list": [ 00:18:01.748 { 00:18:01.748 "name": "spare", 00:18:01.748 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:01.748 "is_configured": true, 00:18:01.748 "data_offset": 256, 00:18:01.748 "data_size": 7936 00:18:01.748 }, 00:18:01.748 { 00:18:01.748 "name": "BaseBdev2", 
00:18:01.748 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:01.748 "is_configured": true, 00:18:01.748 "data_offset": 256, 00:18:01.748 "data_size": 7936 00:18:01.748 } 00:18:01.748 ] 00:18:01.748 }' 00:18:01.748 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.748 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.748 01:38:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.748 01:38:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.748 01:38:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.688 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.688 "name": "raid_bdev1", 00:18:02.688 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:02.688 "strip_size_kb": 0, 00:18:02.688 "state": "online", 00:18:02.688 "raid_level": "raid1", 00:18:02.688 "superblock": true, 00:18:02.688 "num_base_bdevs": 2, 00:18:02.688 "num_base_bdevs_discovered": 2, 00:18:02.688 "num_base_bdevs_operational": 2, 00:18:02.688 "process": { 00:18:02.688 "type": "rebuild", 00:18:02.688 "target": "spare", 00:18:02.688 "progress": { 00:18:02.688 "blocks": 5632, 00:18:02.688 "percent": 70 00:18:02.688 } 00:18:02.688 }, 00:18:02.688 "base_bdevs_list": [ 00:18:02.688 { 00:18:02.688 "name": "spare", 00:18:02.688 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:02.689 "is_configured": true, 00:18:02.689 "data_offset": 256, 00:18:02.689 "data_size": 7936 00:18:02.689 }, 00:18:02.689 { 00:18:02.689 "name": "BaseBdev2", 00:18:02.689 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:02.689 "is_configured": true, 00:18:02.689 "data_offset": 256, 00:18:02.689 "data_size": 7936 00:18:02.689 } 00:18:02.689 ] 00:18:02.689 }' 00:18:02.689 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.689 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.689 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.948 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.948 01:38:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.519 [2024-11-17 01:38:11.867647] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:03.519 [2024-11-17 01:38:11.867873] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:03.519 [2024-11-17 01:38:11.867999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.779 "name": "raid_bdev1", 00:18:03.779 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:03.779 "strip_size_kb": 0, 00:18:03.779 "state": "online", 00:18:03.779 "raid_level": "raid1", 00:18:03.779 "superblock": true, 00:18:03.779 "num_base_bdevs": 2, 00:18:03.779 "num_base_bdevs_discovered": 2, 00:18:03.779 "num_base_bdevs_operational": 2, 00:18:03.779 "base_bdevs_list": [ 00:18:03.779 { 00:18:03.779 "name": 
"spare", 00:18:03.779 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:03.779 "is_configured": true, 00:18:03.779 "data_offset": 256, 00:18:03.779 "data_size": 7936 00:18:03.779 }, 00:18:03.779 { 00:18:03.779 "name": "BaseBdev2", 00:18:03.779 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:03.779 "is_configured": true, 00:18:03.779 "data_offset": 256, 00:18:03.779 "data_size": 7936 00:18:03.779 } 00:18:03.779 ] 00:18:03.779 }' 00:18:03.779 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.039 "name": "raid_bdev1", 00:18:04.039 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:04.039 "strip_size_kb": 0, 00:18:04.039 "state": "online", 00:18:04.039 "raid_level": "raid1", 00:18:04.039 "superblock": true, 00:18:04.039 "num_base_bdevs": 2, 00:18:04.039 "num_base_bdevs_discovered": 2, 00:18:04.039 "num_base_bdevs_operational": 2, 00:18:04.039 "base_bdevs_list": [ 00:18:04.039 { 00:18:04.039 "name": "spare", 00:18:04.039 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:04.039 "is_configured": true, 00:18:04.039 "data_offset": 256, 00:18:04.039 "data_size": 7936 00:18:04.039 }, 00:18:04.039 { 00:18:04.039 "name": "BaseBdev2", 00:18:04.039 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:04.039 "is_configured": true, 00:18:04.039 "data_offset": 256, 00:18:04.039 "data_size": 7936 00:18:04.039 } 00:18:04.039 ] 00:18:04.039 }' 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.039 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.040 "name": "raid_bdev1", 00:18:04.040 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:04.040 "strip_size_kb": 0, 00:18:04.040 "state": "online", 00:18:04.040 "raid_level": "raid1", 00:18:04.040 "superblock": true, 00:18:04.040 "num_base_bdevs": 2, 00:18:04.040 "num_base_bdevs_discovered": 2, 00:18:04.040 "num_base_bdevs_operational": 2, 00:18:04.040 "base_bdevs_list": [ 00:18:04.040 { 00:18:04.040 "name": "spare", 00:18:04.040 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:04.040 "is_configured": true, 00:18:04.040 "data_offset": 256, 00:18:04.040 "data_size": 7936 00:18:04.040 }, 00:18:04.040 
{ 00:18:04.040 "name": "BaseBdev2", 00:18:04.040 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:04.040 "is_configured": true, 00:18:04.040 "data_offset": 256, 00:18:04.040 "data_size": 7936 00:18:04.040 } 00:18:04.040 ] 00:18:04.040 }' 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.040 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.610 [2024-11-17 01:38:12.867061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.610 [2024-11-17 01:38:12.867139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.610 [2024-11-17 01:38:12.867244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.610 [2024-11-17 01:38:12.867319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.610 [2024-11-17 01:38:12.867372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.610 
01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.610 01:38:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:04.870 /dev/nbd0 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:04.870 01:38:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.870 1+0 records in 00:18:04.870 1+0 records out 00:18:04.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441326 s, 9.3 MB/s 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.870 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:05.130 /dev/nbd1 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.130 1+0 records in 00:18:05.130 1+0 records out 00:18:05.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513818 s, 8.0 MB/s 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:18:05.130 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.131 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
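The trace above repeatedly exercises SPDK's `waitfornbd` helper (`autotest_common.sh@872-893`): poll `/proc/partitions` until the nbd device appears, then confirm it is readable by pulling one 4 KiB block with `dd iflag=direct` and checking the copied size is nonzero. The sketch below re-creates that loop; the extra `dev`/`partitions` parameters, the `/tmp/nbdtest` scratch path, and the `sleep 0.1` retry delay are additions made here so the sketch can run without a real `/dev/nbdX` device — the original helper takes only the nbd name and always reads `/proc/partitions`.

```shell
# Sketch of SPDK's waitfornbd, per the trace above. dev/partitions are
# hypothetical parameters added for testability; the real helper hardcodes
# /dev/$nbd_name and /proc/partitions.
waitfornbd() {
    local nbd_name=$1
    local dev=${2:-/dev/$nbd_name}
    local partitions=${3:-/proc/partitions}
    local i

    # Poll until the kernel lists the nbd device among partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions" && break
        sleep 0.1
    done
    ((i <= 20)) || return 1

    # Confirm the device is actually readable: copy one 4 KiB block with
    # O_DIRECT, as the trace's dd invocation does.
    for ((i = 1; i <= 20; i++)); do
        dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null && break
        sleep 0.1
    done

    # The trace checks the copied size against zero ('[' 4096 '!=' 0 ']').
    local size
    size=$(stat -c %s /tmp/nbdtest 2>/dev/null || echo 0)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}
```

The trace shows both loops breaking on the first iteration because `nbd_start_disk` had already registered the device before the poll began.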
00:18:05.131 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:18:05.131 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.131 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.131 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.391 01:38:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.651 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.651 [2024-11-17 01:38:14.064596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.651 [2024-11-17 01:38:14.064709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.651 [2024-11-17 01:38:14.064748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:05.651 [2024-11-17 01:38:14.064793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.651 [2024-11-17 01:38:14.066950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.651 [2024-11-17 01:38:14.067021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.651 [2024-11-17 01:38:14.067138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:05.651 [2024-11-17 01:38:14.067232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.652 [2024-11-17 01:38:14.067429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.652 spare 00:18:05.652 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.652 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:05.652 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.652 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.912 [2024-11-17 01:38:14.167369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:05.912 [2024-11-17 01:38:14.167437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:05.912 [2024-11-17 01:38:14.167713] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:05.912 [2024-11-17 01:38:14.167917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:05.912 [2024-11-17 01:38:14.167960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:05.912 [2024-11-17 01:38:14.168159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.912 01:38:14 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.912 "name": "raid_bdev1", 00:18:05.912 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:05.912 "strip_size_kb": 0, 00:18:05.912 "state": "online", 00:18:05.912 "raid_level": "raid1", 00:18:05.912 "superblock": true, 00:18:05.912 "num_base_bdevs": 2, 00:18:05.912 "num_base_bdevs_discovered": 2, 00:18:05.912 "num_base_bdevs_operational": 2, 00:18:05.912 "base_bdevs_list": [ 00:18:05.912 { 00:18:05.912 "name": "spare", 00:18:05.912 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:05.912 "is_configured": true, 00:18:05.912 "data_offset": 256, 00:18:05.912 "data_size": 7936 00:18:05.912 }, 00:18:05.912 { 00:18:05.912 "name": "BaseBdev2", 00:18:05.912 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:05.912 "is_configured": true, 00:18:05.912 "data_offset": 256, 00:18:05.912 "data_size": 7936 00:18:05.912 } 00:18:05.912 ] 00:18:05.912 }' 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.912 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.482 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.482 "name": "raid_bdev1", 00:18:06.482 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:06.482 "strip_size_kb": 0, 00:18:06.482 "state": "online", 00:18:06.482 "raid_level": "raid1", 00:18:06.482 "superblock": true, 00:18:06.482 "num_base_bdevs": 2, 00:18:06.482 "num_base_bdevs_discovered": 2, 00:18:06.482 "num_base_bdevs_operational": 2, 00:18:06.483 "base_bdevs_list": [ 00:18:06.483 { 00:18:06.483 "name": "spare", 00:18:06.483 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:06.483 "is_configured": true, 00:18:06.483 "data_offset": 256, 00:18:06.483 "data_size": 7936 00:18:06.483 }, 00:18:06.483 { 00:18:06.483 "name": "BaseBdev2", 00:18:06.483 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:06.483 "is_configured": true, 00:18:06.483 "data_offset": 256, 00:18:06.483 "data_size": 7936 00:18:06.483 } 00:18:06.483 ] 00:18:06.483 }' 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.483 [2024-11-17 01:38:14.847302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:06.483 01:38:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.483 "name": "raid_bdev1", 00:18:06.483 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:06.483 "strip_size_kb": 0, 00:18:06.483 "state": "online", 00:18:06.483 "raid_level": "raid1", 00:18:06.483 "superblock": true, 00:18:06.483 "num_base_bdevs": 2, 00:18:06.483 "num_base_bdevs_discovered": 1, 00:18:06.483 "num_base_bdevs_operational": 1, 00:18:06.483 "base_bdevs_list": [ 00:18:06.483 { 00:18:06.483 "name": null, 00:18:06.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.483 "is_configured": false, 00:18:06.483 "data_offset": 0, 00:18:06.483 "data_size": 7936 00:18:06.483 }, 00:18:06.483 { 00:18:06.483 "name": "BaseBdev2", 00:18:06.483 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:06.483 "is_configured": true, 00:18:06.483 "data_offset": 256, 00:18:06.483 "data_size": 7936 00:18:06.483 } 00:18:06.483 ] 00:18:06.483 }' 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.483 01:38:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.053 01:38:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.053 01:38:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.053 01:38:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.053 [2024-11-17 01:38:15.330554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.053 [2024-11-17 01:38:15.330748] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.053 [2024-11-17 01:38:15.330818] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:07.053 [2024-11-17 01:38:15.330871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.053 [2024-11-17 01:38:15.345824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:07.053 01:38:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.053 01:38:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:07.053 [2024-11-17 01:38:15.347621] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.994 
01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.994 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.994 "name": "raid_bdev1", 00:18:07.994 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:07.994 "strip_size_kb": 0, 00:18:07.994 "state": "online", 00:18:07.994 "raid_level": "raid1", 00:18:07.994 "superblock": true, 00:18:07.994 "num_base_bdevs": 2, 00:18:07.994 "num_base_bdevs_discovered": 2, 00:18:07.994 "num_base_bdevs_operational": 2, 00:18:07.994 "process": { 00:18:07.994 "type": "rebuild", 00:18:07.994 "target": "spare", 00:18:07.994 "progress": { 00:18:07.994 "blocks": 2560, 00:18:07.994 "percent": 32 00:18:07.994 } 00:18:07.994 }, 00:18:07.994 "base_bdevs_list": [ 00:18:07.994 { 00:18:07.994 "name": "spare", 00:18:07.994 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:07.994 "is_configured": true, 00:18:07.994 "data_offset": 256, 00:18:07.994 "data_size": 7936 00:18:07.994 }, 00:18:07.994 { 00:18:07.994 "name": "BaseBdev2", 00:18:07.994 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:07.994 "is_configured": true, 00:18:07.994 "data_offset": 256, 00:18:07.994 "data_size": 7936 00:18:07.994 } 00:18:07.994 ] 00:18:07.994 }' 00:18:07.995 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.995 01:38:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.995 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.255 [2024-11-17 01:38:16.487753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.255 [2024-11-17 01:38:16.552088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:08.255 [2024-11-17 01:38:16.552207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.255 [2024-11-17 01:38:16.552241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.255 [2024-11-17 01:38:16.552263] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.255 01:38:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.255 "name": "raid_bdev1", 00:18:08.255 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:08.255 "strip_size_kb": 0, 00:18:08.255 "state": "online", 00:18:08.255 "raid_level": "raid1", 00:18:08.255 "superblock": true, 00:18:08.255 "num_base_bdevs": 2, 00:18:08.255 "num_base_bdevs_discovered": 1, 00:18:08.255 "num_base_bdevs_operational": 1, 00:18:08.255 "base_bdevs_list": [ 00:18:08.255 { 00:18:08.255 "name": null, 00:18:08.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.255 "is_configured": false, 00:18:08.255 "data_offset": 0, 00:18:08.255 "data_size": 7936 00:18:08.255 }, 00:18:08.255 { 00:18:08.255 "name": "BaseBdev2", 00:18:08.255 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:08.255 "is_configured": true, 00:18:08.255 "data_offset": 256, 00:18:08.255 
"data_size": 7936 00:18:08.255 } 00:18:08.255 ] 00:18:08.255 }' 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.255 01:38:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.825 01:38:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:08.825 01:38:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.825 01:38:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.825 [2024-11-17 01:38:17.035494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.825 [2024-11-17 01:38:17.035600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.825 [2024-11-17 01:38:17.035635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:08.825 [2024-11-17 01:38:17.035663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.825 [2024-11-17 01:38:17.036135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.825 [2024-11-17 01:38:17.036197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.825 [2024-11-17 01:38:17.036307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:08.825 [2024-11-17 01:38:17.036348] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.825 [2024-11-17 01:38:17.036386] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:08.825 [2024-11-17 01:38:17.036430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.825 [2024-11-17 01:38:17.050814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:08.825 spare 00:18:08.825 01:38:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.825 [2024-11-17 01:38:17.052597] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.825 01:38:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.765 "name": "raid_bdev1", 00:18:09.765 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:09.765 "strip_size_kb": 0, 00:18:09.765 
"state": "online", 00:18:09.765 "raid_level": "raid1", 00:18:09.765 "superblock": true, 00:18:09.765 "num_base_bdevs": 2, 00:18:09.765 "num_base_bdevs_discovered": 2, 00:18:09.765 "num_base_bdevs_operational": 2, 00:18:09.765 "process": { 00:18:09.765 "type": "rebuild", 00:18:09.765 "target": "spare", 00:18:09.765 "progress": { 00:18:09.765 "blocks": 2560, 00:18:09.765 "percent": 32 00:18:09.765 } 00:18:09.765 }, 00:18:09.765 "base_bdevs_list": [ 00:18:09.765 { 00:18:09.765 "name": "spare", 00:18:09.765 "uuid": "b6e54079-9efe-534c-9145-17bc54c20d90", 00:18:09.765 "is_configured": true, 00:18:09.765 "data_offset": 256, 00:18:09.765 "data_size": 7936 00:18:09.765 }, 00:18:09.765 { 00:18:09.765 "name": "BaseBdev2", 00:18:09.765 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:09.765 "is_configured": true, 00:18:09.765 "data_offset": 256, 00:18:09.765 "data_size": 7936 00:18:09.765 } 00:18:09.765 ] 00:18:09.765 }' 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.765 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.765 [2024-11-17 01:38:18.212197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.025 [2024-11-17 01:38:18.257121] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:10.025 [2024-11-17 01:38:18.257216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.025 [2024-11-17 01:38:18.257252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.025 [2024-11-17 01:38:18.257271] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:10.025 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.025 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.025 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.025 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.025 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.025 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.025 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.025 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.026 01:38:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.026 "name": "raid_bdev1", 00:18:10.026 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:10.026 "strip_size_kb": 0, 00:18:10.026 "state": "online", 00:18:10.026 "raid_level": "raid1", 00:18:10.026 "superblock": true, 00:18:10.026 "num_base_bdevs": 2, 00:18:10.026 "num_base_bdevs_discovered": 1, 00:18:10.026 "num_base_bdevs_operational": 1, 00:18:10.026 "base_bdevs_list": [ 00:18:10.026 { 00:18:10.026 "name": null, 00:18:10.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.026 "is_configured": false, 00:18:10.026 "data_offset": 0, 00:18:10.026 "data_size": 7936 00:18:10.026 }, 00:18:10.026 { 00:18:10.026 "name": "BaseBdev2", 00:18:10.026 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:10.026 "is_configured": true, 00:18:10.026 "data_offset": 256, 00:18:10.026 "data_size": 7936 00:18:10.026 } 00:18:10.026 ] 00:18:10.026 }' 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.026 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.596 "name": "raid_bdev1", 00:18:10.596 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:10.596 "strip_size_kb": 0, 00:18:10.596 "state": "online", 00:18:10.596 "raid_level": "raid1", 00:18:10.596 "superblock": true, 00:18:10.596 "num_base_bdevs": 2, 00:18:10.596 "num_base_bdevs_discovered": 1, 00:18:10.596 "num_base_bdevs_operational": 1, 00:18:10.596 "base_bdevs_list": [ 00:18:10.596 { 00:18:10.596 "name": null, 00:18:10.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.596 "is_configured": false, 00:18:10.596 "data_offset": 0, 00:18:10.596 "data_size": 7936 00:18:10.596 }, 00:18:10.596 { 00:18:10.596 "name": "BaseBdev2", 00:18:10.596 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:10.596 "is_configured": true, 00:18:10.596 "data_offset": 256, 00:18:10.596 "data_size": 7936 00:18:10.596 } 00:18:10.596 ] 00:18:10.596 }' 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.596 [2024-11-17 01:38:18.932340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:10.596 [2024-11-17 01:38:18.932438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.596 [2024-11-17 01:38:18.932477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:10.596 [2024-11-17 01:38:18.932516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.596 [2024-11-17 01:38:18.932963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.596 [2024-11-17 01:38:18.933024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:10.596 [2024-11-17 01:38:18.933128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:10.596 [2024-11-17 01:38:18.933166] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:10.596 [2024-11-17 01:38:18.933210] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:10.596 [2024-11-17 01:38:18.933241] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:10.596 BaseBdev1 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.596 01:38:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.536 "name": "raid_bdev1", 00:18:11.536 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:11.536 "strip_size_kb": 0, 00:18:11.536 "state": "online", 00:18:11.536 "raid_level": "raid1", 00:18:11.536 "superblock": true, 00:18:11.536 "num_base_bdevs": 2, 00:18:11.536 "num_base_bdevs_discovered": 1, 00:18:11.536 "num_base_bdevs_operational": 1, 00:18:11.536 "base_bdevs_list": [ 00:18:11.536 { 00:18:11.536 "name": null, 00:18:11.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.536 "is_configured": false, 00:18:11.536 "data_offset": 0, 00:18:11.536 "data_size": 7936 00:18:11.536 }, 00:18:11.536 { 00:18:11.536 "name": "BaseBdev2", 00:18:11.536 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:11.536 "is_configured": true, 00:18:11.536 "data_offset": 256, 00:18:11.536 "data_size": 7936 00:18:11.536 } 00:18:11.536 ] 00:18:11.536 }' 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.536 01:38:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.108 "name": "raid_bdev1", 00:18:12.108 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:12.108 "strip_size_kb": 0, 00:18:12.108 "state": "online", 00:18:12.108 "raid_level": "raid1", 00:18:12.108 "superblock": true, 00:18:12.108 "num_base_bdevs": 2, 00:18:12.108 "num_base_bdevs_discovered": 1, 00:18:12.108 "num_base_bdevs_operational": 1, 00:18:12.108 "base_bdevs_list": [ 00:18:12.108 { 00:18:12.108 "name": null, 00:18:12.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.108 "is_configured": false, 00:18:12.108 "data_offset": 0, 00:18:12.108 "data_size": 7936 00:18:12.108 }, 00:18:12.108 { 00:18:12.108 "name": "BaseBdev2", 00:18:12.108 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:12.108 "is_configured": true, 00:18:12.108 "data_offset": 256, 00:18:12.108 "data_size": 7936 00:18:12.108 } 00:18:12.108 ] 00:18:12.108 }' 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.108 [2024-11-17 01:38:20.553567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.108 [2024-11-17 01:38:20.553739] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:12.108 [2024-11-17 01:38:20.553831] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:12.108 request: 00:18:12.108 { 00:18:12.108 "base_bdev": "BaseBdev1", 00:18:12.108 "raid_bdev": "raid_bdev1", 00:18:12.108 "method": "bdev_raid_add_base_bdev", 00:18:12.108 "req_id": 1 00:18:12.108 } 00:18:12.108 Got JSON-RPC error response 00:18:12.108 response: 00:18:12.108 { 00:18:12.108 "code": -22, 00:18:12.108 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:12.108 } 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.108 01:38:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.491 "name": "raid_bdev1", 00:18:13.491 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:13.491 "strip_size_kb": 0, 00:18:13.491 "state": "online", 00:18:13.491 "raid_level": "raid1", 00:18:13.491 "superblock": true, 00:18:13.491 "num_base_bdevs": 2, 00:18:13.491 "num_base_bdevs_discovered": 1, 00:18:13.491 "num_base_bdevs_operational": 1, 00:18:13.491 "base_bdevs_list": [ 00:18:13.491 { 00:18:13.491 "name": null, 00:18:13.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.491 "is_configured": false, 00:18:13.491 "data_offset": 0, 00:18:13.491 "data_size": 7936 00:18:13.491 }, 00:18:13.491 { 00:18:13.491 "name": "BaseBdev2", 00:18:13.491 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:13.491 "is_configured": true, 00:18:13.491 "data_offset": 256, 00:18:13.491 "data_size": 7936 00:18:13.491 } 00:18:13.491 ] 00:18:13.491 }' 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.491 01:38:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.751 01:38:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.751 "name": "raid_bdev1", 00:18:13.751 "uuid": "6f154520-9536-4d5f-92e5-02a7ffe292fa", 00:18:13.751 "strip_size_kb": 0, 00:18:13.751 "state": "online", 00:18:13.751 "raid_level": "raid1", 00:18:13.751 "superblock": true, 00:18:13.751 "num_base_bdevs": 2, 00:18:13.751 "num_base_bdevs_discovered": 1, 00:18:13.751 "num_base_bdevs_operational": 1, 00:18:13.751 "base_bdevs_list": [ 00:18:13.751 { 00:18:13.751 "name": null, 00:18:13.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.751 "is_configured": false, 00:18:13.751 "data_offset": 0, 00:18:13.751 "data_size": 7936 00:18:13.751 }, 00:18:13.751 { 00:18:13.751 "name": "BaseBdev2", 00:18:13.751 "uuid": "d5ee9636-53e5-5f20-bf8a-6b5247a95613", 00:18:13.751 "is_configured": true, 00:18:13.751 "data_offset": 256, 00:18:13.751 "data_size": 7936 00:18:13.751 } 00:18:13.751 ] 00:18:13.751 }' 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.751 01:38:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86236 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86236 ']' 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86236 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.751 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86236 00:18:14.011 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.011 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.011 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86236' 00:18:14.011 killing process with pid 86236 00:18:14.011 Received shutdown signal, test time was about 60.000000 seconds 00:18:14.011 00:18:14.011 Latency(us) 00:18:14.011 [2024-11-17T01:38:22.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.011 [2024-11-17T01:38:22.471Z] =================================================================================================================== 00:18:14.011 [2024-11-17T01:38:22.471Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.011 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86236 00:18:14.011 [2024-11-17 01:38:22.223022] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.011 [2024-11-17 01:38:22.223134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.011 [2024-11-17 01:38:22.223175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:18:14.011 [2024-11-17 01:38:22.223197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:14.011 01:38:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86236 00:18:14.271 [2024-11-17 01:38:22.503050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.210 01:38:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:15.210 00:18:15.210 real 0m19.694s 00:18:15.210 user 0m25.620s 00:18:15.210 sys 0m2.779s 00:18:15.210 01:38:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.210 ************************************ 00:18:15.210 END TEST raid_rebuild_test_sb_4k 00:18:15.210 ************************************ 00:18:15.210 01:38:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.210 01:38:23 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:15.210 01:38:23 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:15.210 01:38:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:15.210 01:38:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.210 01:38:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.210 ************************************ 00:18:15.210 START TEST raid_state_function_test_sb_md_separate 00:18:15.210 ************************************ 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:15.210 01:38:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:15.210 01:38:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:15.210 Process raid pid: 86927 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86927 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86927' 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86927 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86927 ']' 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.210 01:38:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.470 [2024-11-17 01:38:23.686900] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:15.470 [2024-11-17 01:38:23.687195] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.470 [2024-11-17 01:38:23.863335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.730 [2024-11-17 01:38:23.971203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.730 [2024-11-17 01:38:24.170023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.730 [2024-11-17 01:38:24.170107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.300 [2024-11-17 01:38:24.511301] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.300 [2024-11-17 01:38:24.511443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:16.300 [2024-11-17 01:38:24.511474] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.300 [2024-11-17 01:38:24.511498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.300 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.301 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.301 "name": "Existed_Raid", 00:18:16.301 "uuid": "eb23e2ff-5a7d-4d53-aa3e-4f737b6fb816", 00:18:16.301 "strip_size_kb": 0, 00:18:16.301 "state": "configuring", 00:18:16.301 "raid_level": "raid1", 00:18:16.301 "superblock": true, 00:18:16.301 "num_base_bdevs": 2, 00:18:16.301 "num_base_bdevs_discovered": 0, 00:18:16.301 "num_base_bdevs_operational": 2, 00:18:16.301 "base_bdevs_list": [ 00:18:16.301 { 00:18:16.301 "name": "BaseBdev1", 00:18:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.301 "is_configured": false, 00:18:16.301 "data_offset": 0, 00:18:16.301 "data_size": 0 00:18:16.301 }, 00:18:16.301 { 00:18:16.301 "name": "BaseBdev2", 00:18:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.301 "is_configured": false, 00:18:16.301 "data_offset": 0, 00:18:16.301 "data_size": 0 00:18:16.301 } 00:18:16.301 ] 00:18:16.301 }' 00:18:16.301 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.301 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:16.560 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.560 01:38:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 
[2024-11-17 01:38:25.002351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.560 [2024-11-17 01:38:25.002431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:16.560 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.560 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:16.560 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.560 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 [2024-11-17 01:38:25.014333] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.560 [2024-11-17 01:38:25.014372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.560 [2024-11-17 01:38:25.014380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.560 [2024-11-17 01:38:25.014391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.821 [2024-11-17 01:38:25.061865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.821 
BaseBdev1 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.821 [ 00:18:16.821 { 00:18:16.821 "name": "BaseBdev1", 00:18:16.821 "aliases": [ 00:18:16.821 "9084f082-a078-4836-9c23-b8ef674ec4dd" 00:18:16.821 ], 00:18:16.821 "product_name": "Malloc disk", 
00:18:16.821 "block_size": 4096, 00:18:16.821 "num_blocks": 8192, 00:18:16.821 "uuid": "9084f082-a078-4836-9c23-b8ef674ec4dd", 00:18:16.821 "md_size": 32, 00:18:16.821 "md_interleave": false, 00:18:16.821 "dif_type": 0, 00:18:16.821 "assigned_rate_limits": { 00:18:16.821 "rw_ios_per_sec": 0, 00:18:16.821 "rw_mbytes_per_sec": 0, 00:18:16.821 "r_mbytes_per_sec": 0, 00:18:16.821 "w_mbytes_per_sec": 0 00:18:16.821 }, 00:18:16.821 "claimed": true, 00:18:16.821 "claim_type": "exclusive_write", 00:18:16.821 "zoned": false, 00:18:16.821 "supported_io_types": { 00:18:16.821 "read": true, 00:18:16.821 "write": true, 00:18:16.821 "unmap": true, 00:18:16.821 "flush": true, 00:18:16.821 "reset": true, 00:18:16.821 "nvme_admin": false, 00:18:16.821 "nvme_io": false, 00:18:16.821 "nvme_io_md": false, 00:18:16.821 "write_zeroes": true, 00:18:16.821 "zcopy": true, 00:18:16.821 "get_zone_info": false, 00:18:16.821 "zone_management": false, 00:18:16.821 "zone_append": false, 00:18:16.821 "compare": false, 00:18:16.821 "compare_and_write": false, 00:18:16.821 "abort": true, 00:18:16.821 "seek_hole": false, 00:18:16.821 "seek_data": false, 00:18:16.821 "copy": true, 00:18:16.821 "nvme_iov_md": false 00:18:16.821 }, 00:18:16.821 "memory_domains": [ 00:18:16.821 { 00:18:16.821 "dma_device_id": "system", 00:18:16.821 "dma_device_type": 1 00:18:16.821 }, 00:18:16.821 { 00:18:16.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.821 "dma_device_type": 2 00:18:16.821 } 00:18:16.821 ], 00:18:16.821 "driver_specific": {} 00:18:16.821 } 00:18:16.821 ] 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:16.821 01:38:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.821 "name": "Existed_Raid", 00:18:16.821 "uuid": "21a69970-f0ce-415b-9665-8709c20fa221", 
00:18:16.821 "strip_size_kb": 0, 00:18:16.821 "state": "configuring", 00:18:16.821 "raid_level": "raid1", 00:18:16.821 "superblock": true, 00:18:16.821 "num_base_bdevs": 2, 00:18:16.821 "num_base_bdevs_discovered": 1, 00:18:16.821 "num_base_bdevs_operational": 2, 00:18:16.821 "base_bdevs_list": [ 00:18:16.821 { 00:18:16.821 "name": "BaseBdev1", 00:18:16.821 "uuid": "9084f082-a078-4836-9c23-b8ef674ec4dd", 00:18:16.821 "is_configured": true, 00:18:16.821 "data_offset": 256, 00:18:16.821 "data_size": 7936 00:18:16.821 }, 00:18:16.821 { 00:18:16.821 "name": "BaseBdev2", 00:18:16.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.821 "is_configured": false, 00:18:16.821 "data_offset": 0, 00:18:16.821 "data_size": 0 00:18:16.821 } 00:18:16.821 ] 00:18:16.821 }' 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.821 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.391 [2024-11-17 01:38:25.569088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:17.391 [2024-11-17 01:38:25.569192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:17.391 01:38:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.391 [2024-11-17 01:38:25.581108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.391 [2024-11-17 01:38:25.582835] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.391 [2024-11-17 01:38:25.582906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.391 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.391 "name": "Existed_Raid", 00:18:17.391 "uuid": "c285c153-b171-49f8-aef4-49b3e5e55150", 00:18:17.391 "strip_size_kb": 0, 00:18:17.391 "state": "configuring", 00:18:17.391 "raid_level": "raid1", 00:18:17.391 "superblock": true, 00:18:17.392 "num_base_bdevs": 2, 00:18:17.392 "num_base_bdevs_discovered": 1, 00:18:17.392 "num_base_bdevs_operational": 2, 00:18:17.392 "base_bdevs_list": [ 00:18:17.392 { 00:18:17.392 "name": "BaseBdev1", 00:18:17.392 "uuid": "9084f082-a078-4836-9c23-b8ef674ec4dd", 00:18:17.392 "is_configured": true, 00:18:17.392 "data_offset": 256, 00:18:17.392 "data_size": 7936 00:18:17.392 }, 00:18:17.392 { 00:18:17.392 "name": "BaseBdev2", 00:18:17.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.392 "is_configured": false, 00:18:17.392 "data_offset": 0, 00:18:17.392 "data_size": 0 00:18:17.392 } 00:18:17.392 ] 00:18:17.392 }' 00:18:17.392 01:38:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.392 01:38:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.672 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:17.672 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.672 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.672 [2024-11-17 01:38:26.042392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.672 [2024-11-17 01:38:26.042613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:17.672 [2024-11-17 01:38:26.042628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:17.672 [2024-11-17 01:38:26.042709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:17.673 [2024-11-17 01:38:26.042846] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:17.673 [2024-11-17 01:38:26.042857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:17.673 [2024-11-17 01:38:26.042953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.673 BaseBdev2 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.673 [ 00:18:17.673 { 00:18:17.673 "name": "BaseBdev2", 00:18:17.673 "aliases": [ 00:18:17.673 "fabab276-4996-4035-bce2-2a75fea1ebe9" 00:18:17.673 ], 00:18:17.673 "product_name": "Malloc disk", 00:18:17.673 "block_size": 4096, 00:18:17.673 "num_blocks": 8192, 00:18:17.673 "uuid": "fabab276-4996-4035-bce2-2a75fea1ebe9", 00:18:17.673 "md_size": 32, 00:18:17.673 "md_interleave": false, 00:18:17.673 "dif_type": 0, 00:18:17.673 "assigned_rate_limits": { 00:18:17.673 "rw_ios_per_sec": 0, 00:18:17.673 "rw_mbytes_per_sec": 0, 00:18:17.673 "r_mbytes_per_sec": 0, 00:18:17.673 "w_mbytes_per_sec": 0 00:18:17.673 }, 00:18:17.673 "claimed": true, 00:18:17.673 "claim_type": 
"exclusive_write", 00:18:17.673 "zoned": false, 00:18:17.673 "supported_io_types": { 00:18:17.673 "read": true, 00:18:17.673 "write": true, 00:18:17.673 "unmap": true, 00:18:17.673 "flush": true, 00:18:17.673 "reset": true, 00:18:17.673 "nvme_admin": false, 00:18:17.673 "nvme_io": false, 00:18:17.673 "nvme_io_md": false, 00:18:17.673 "write_zeroes": true, 00:18:17.673 "zcopy": true, 00:18:17.673 "get_zone_info": false, 00:18:17.673 "zone_management": false, 00:18:17.673 "zone_append": false, 00:18:17.673 "compare": false, 00:18:17.673 "compare_and_write": false, 00:18:17.673 "abort": true, 00:18:17.673 "seek_hole": false, 00:18:17.673 "seek_data": false, 00:18:17.673 "copy": true, 00:18:17.673 "nvme_iov_md": false 00:18:17.673 }, 00:18:17.673 "memory_domains": [ 00:18:17.673 { 00:18:17.673 "dma_device_id": "system", 00:18:17.673 "dma_device_type": 1 00:18:17.673 }, 00:18:17.673 { 00:18:17.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.673 "dma_device_type": 2 00:18:17.673 } 00:18:17.673 ], 00:18:17.673 "driver_specific": {} 00:18:17.673 } 00:18:17.673 ] 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.673 
01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.673 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.974 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.974 "name": "Existed_Raid", 00:18:17.974 "uuid": "c285c153-b171-49f8-aef4-49b3e5e55150", 00:18:17.974 "strip_size_kb": 0, 00:18:17.974 "state": "online", 00:18:17.974 "raid_level": "raid1", 00:18:17.974 "superblock": true, 00:18:17.974 "num_base_bdevs": 2, 00:18:17.974 "num_base_bdevs_discovered": 2, 00:18:17.974 "num_base_bdevs_operational": 2, 00:18:17.974 
"base_bdevs_list": [ 00:18:17.974 { 00:18:17.974 "name": "BaseBdev1", 00:18:17.974 "uuid": "9084f082-a078-4836-9c23-b8ef674ec4dd", 00:18:17.974 "is_configured": true, 00:18:17.974 "data_offset": 256, 00:18:17.974 "data_size": 7936 00:18:17.974 }, 00:18:17.974 { 00:18:17.974 "name": "BaseBdev2", 00:18:17.974 "uuid": "fabab276-4996-4035-bce2-2a75fea1ebe9", 00:18:17.974 "is_configured": true, 00:18:17.974 "data_offset": 256, 00:18:17.974 "data_size": 7936 00:18:17.974 } 00:18:17.974 ] 00:18:17.974 }' 00:18:17.974 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.974 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:18.248 [2024-11-17 01:38:26.565802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:18.248 "name": "Existed_Raid", 00:18:18.248 "aliases": [ 00:18:18.248 "c285c153-b171-49f8-aef4-49b3e5e55150" 00:18:18.248 ], 00:18:18.248 "product_name": "Raid Volume", 00:18:18.248 "block_size": 4096, 00:18:18.248 "num_blocks": 7936, 00:18:18.248 "uuid": "c285c153-b171-49f8-aef4-49b3e5e55150", 00:18:18.248 "md_size": 32, 00:18:18.248 "md_interleave": false, 00:18:18.248 "dif_type": 0, 00:18:18.248 "assigned_rate_limits": { 00:18:18.248 "rw_ios_per_sec": 0, 00:18:18.248 "rw_mbytes_per_sec": 0, 00:18:18.248 "r_mbytes_per_sec": 0, 00:18:18.248 "w_mbytes_per_sec": 0 00:18:18.248 }, 00:18:18.248 "claimed": false, 00:18:18.248 "zoned": false, 00:18:18.248 "supported_io_types": { 00:18:18.248 "read": true, 00:18:18.248 "write": true, 00:18:18.248 "unmap": false, 00:18:18.248 "flush": false, 00:18:18.248 "reset": true, 00:18:18.248 "nvme_admin": false, 00:18:18.248 "nvme_io": false, 00:18:18.248 "nvme_io_md": false, 00:18:18.248 "write_zeroes": true, 00:18:18.248 "zcopy": false, 00:18:18.248 "get_zone_info": false, 00:18:18.248 "zone_management": false, 00:18:18.248 "zone_append": false, 00:18:18.248 "compare": false, 00:18:18.248 "compare_and_write": false, 00:18:18.248 "abort": false, 00:18:18.248 "seek_hole": false, 00:18:18.248 "seek_data": false, 00:18:18.248 "copy": false, 00:18:18.248 "nvme_iov_md": false 00:18:18.248 }, 00:18:18.248 "memory_domains": [ 00:18:18.248 { 00:18:18.248 "dma_device_id": "system", 00:18:18.248 "dma_device_type": 1 00:18:18.248 }, 00:18:18.248 { 00:18:18.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.248 "dma_device_type": 2 00:18:18.248 }, 00:18:18.248 { 
00:18:18.248 "dma_device_id": "system", 00:18:18.248 "dma_device_type": 1 00:18:18.248 }, 00:18:18.248 { 00:18:18.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.248 "dma_device_type": 2 00:18:18.248 } 00:18:18.248 ], 00:18:18.248 "driver_specific": { 00:18:18.248 "raid": { 00:18:18.248 "uuid": "c285c153-b171-49f8-aef4-49b3e5e55150", 00:18:18.248 "strip_size_kb": 0, 00:18:18.248 "state": "online", 00:18:18.248 "raid_level": "raid1", 00:18:18.248 "superblock": true, 00:18:18.248 "num_base_bdevs": 2, 00:18:18.248 "num_base_bdevs_discovered": 2, 00:18:18.248 "num_base_bdevs_operational": 2, 00:18:18.248 "base_bdevs_list": [ 00:18:18.248 { 00:18:18.248 "name": "BaseBdev1", 00:18:18.248 "uuid": "9084f082-a078-4836-9c23-b8ef674ec4dd", 00:18:18.248 "is_configured": true, 00:18:18.248 "data_offset": 256, 00:18:18.248 "data_size": 7936 00:18:18.248 }, 00:18:18.248 { 00:18:18.248 "name": "BaseBdev2", 00:18:18.248 "uuid": "fabab276-4996-4035-bce2-2a75fea1ebe9", 00:18:18.248 "is_configured": true, 00:18:18.248 "data_offset": 256, 00:18:18.248 "data_size": 7936 00:18:18.248 } 00:18:18.248 ] 00:18:18.248 } 00:18:18.248 } 00:18:18.248 }' 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:18.248 BaseBdev2' 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.248 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:18.249 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.249 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.509 [2024-11-17 01:38:26.797163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.509 "name": "Existed_Raid", 00:18:18.509 "uuid": "c285c153-b171-49f8-aef4-49b3e5e55150", 00:18:18.509 "strip_size_kb": 0, 00:18:18.509 "state": "online", 00:18:18.509 "raid_level": "raid1", 00:18:18.509 "superblock": true, 00:18:18.509 "num_base_bdevs": 2, 00:18:18.509 "num_base_bdevs_discovered": 1, 00:18:18.509 "num_base_bdevs_operational": 1, 00:18:18.509 "base_bdevs_list": [ 00:18:18.509 { 00:18:18.509 "name": null, 00:18:18.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.509 "is_configured": false, 00:18:18.509 "data_offset": 0, 00:18:18.509 "data_size": 7936 00:18:18.509 }, 00:18:18.509 { 00:18:18.509 "name": "BaseBdev2", 00:18:18.509 "uuid": 
"fabab276-4996-4035-bce2-2a75fea1ebe9", 00:18:18.509 "is_configured": true, 00:18:18.509 "data_offset": 256, 00:18:18.509 "data_size": 7936 00:18:18.509 } 00:18:18.509 ] 00:18:18.509 }' 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.509 01:38:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.080 [2024-11-17 01:38:27.437362] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.080 [2024-11-17 01:38:27.437524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.080 [2024-11-17 01:38:27.533379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.080 [2024-11-17 01:38:27.533501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.080 [2024-11-17 01:38:27.533518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:19.080 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:19.340 01:38:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86927 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86927 ']' 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86927 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86927 00:18:19.340 killing process with pid 86927 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86927' 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86927 00:18:19.340 [2024-11-17 01:38:27.631898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:19.340 01:38:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86927 00:18:19.340 [2024-11-17 01:38:27.647313] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.281 01:38:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:20.281 00:18:20.281 real 0m5.097s 00:18:20.281 user 0m7.392s 00:18:20.281 sys 0m0.930s 00:18:20.281 01:38:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.281 
01:38:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.281 ************************************ 00:18:20.281 END TEST raid_state_function_test_sb_md_separate 00:18:20.281 ************************************ 00:18:20.542 01:38:28 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:20.542 01:38:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:20.542 01:38:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.542 01:38:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.542 ************************************ 00:18:20.542 START TEST raid_superblock_test_md_separate 00:18:20.542 ************************************ 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87174 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87174 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87174 ']' 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.542 01:38:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.542 [2024-11-17 01:38:28.859771] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:20.542 [2024-11-17 01:38:28.859924] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87174 ] 00:18:20.804 [2024-11-17 01:38:29.037276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.804 [2024-11-17 01:38:29.151097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.064 [2024-11-17 01:38:29.333444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.064 [2024-11-17 01:38:29.333478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.323 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.323 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:21.323 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:21.323 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:21.323 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:21.323 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:21.323 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:21.323 01:38:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:21.323 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.324 malloc1 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.324 [2024-11-17 01:38:29.715514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:21.324 [2024-11-17 01:38:29.715668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.324 [2024-11-17 01:38:29.715706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:21.324 [2024-11-17 01:38:29.715735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.324 [2024-11-17 01:38:29.717534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.324 [2024-11-17 01:38:29.717571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:21.324 pt1 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.324 malloc2 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.324 01:38:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.324 [2024-11-17 01:38:29.768790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.324 [2024-11-17 01:38:29.768901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.324 [2024-11-17 01:38:29.768953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:21.324 [2024-11-17 01:38:29.768981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.324 [2024-11-17 01:38:29.770818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.324 [2024-11-17 01:38:29.770889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.324 pt2 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.324 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.324 [2024-11-17 01:38:29.780797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.585 [2024-11-17 01:38:29.782546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.585 [2024-11-17 01:38:29.782779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:21.585 [2024-11-17 01:38:29.782823] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:21.585 [2024-11-17 01:38:29.782916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:21.585 [2024-11-17 01:38:29.783072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:21.585 [2024-11-17 01:38:29.783113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:21.585 [2024-11-17 01:38:29.783262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.585 01:38:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.585 "name": "raid_bdev1", 00:18:21.585 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:21.585 "strip_size_kb": 0, 00:18:21.585 "state": "online", 00:18:21.585 "raid_level": "raid1", 00:18:21.585 "superblock": true, 00:18:21.585 "num_base_bdevs": 2, 00:18:21.585 "num_base_bdevs_discovered": 2, 00:18:21.585 "num_base_bdevs_operational": 2, 00:18:21.585 "base_bdevs_list": [ 00:18:21.585 { 00:18:21.585 "name": "pt1", 00:18:21.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.585 "is_configured": true, 00:18:21.585 "data_offset": 256, 00:18:21.585 "data_size": 7936 00:18:21.585 }, 00:18:21.585 { 00:18:21.585 "name": "pt2", 00:18:21.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.585 "is_configured": true, 00:18:21.585 "data_offset": 256, 00:18:21.585 "data_size": 7936 00:18:21.585 } 00:18:21.585 ] 00:18:21.585 }' 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.585 01:38:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:21.845 [2024-11-17 01:38:30.256194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.845 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.845 "name": "raid_bdev1", 00:18:21.845 "aliases": [ 00:18:21.845 "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1" 00:18:21.845 ], 00:18:21.845 "product_name": "Raid Volume", 00:18:21.845 "block_size": 4096, 00:18:21.845 "num_blocks": 7936, 00:18:21.845 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:21.845 "md_size": 32, 00:18:21.845 "md_interleave": false, 00:18:21.845 "dif_type": 0, 00:18:21.845 "assigned_rate_limits": { 00:18:21.845 "rw_ios_per_sec": 0, 00:18:21.845 "rw_mbytes_per_sec": 0, 00:18:21.845 "r_mbytes_per_sec": 0, 00:18:21.845 "w_mbytes_per_sec": 0 00:18:21.845 }, 00:18:21.845 "claimed": false, 00:18:21.845 "zoned": false, 
00:18:21.845 "supported_io_types": { 00:18:21.845 "read": true, 00:18:21.845 "write": true, 00:18:21.845 "unmap": false, 00:18:21.845 "flush": false, 00:18:21.845 "reset": true, 00:18:21.845 "nvme_admin": false, 00:18:21.845 "nvme_io": false, 00:18:21.845 "nvme_io_md": false, 00:18:21.845 "write_zeroes": true, 00:18:21.845 "zcopy": false, 00:18:21.845 "get_zone_info": false, 00:18:21.845 "zone_management": false, 00:18:21.845 "zone_append": false, 00:18:21.845 "compare": false, 00:18:21.845 "compare_and_write": false, 00:18:21.846 "abort": false, 00:18:21.846 "seek_hole": false, 00:18:21.846 "seek_data": false, 00:18:21.846 "copy": false, 00:18:21.846 "nvme_iov_md": false 00:18:21.846 }, 00:18:21.846 "memory_domains": [ 00:18:21.846 { 00:18:21.846 "dma_device_id": "system", 00:18:21.846 "dma_device_type": 1 00:18:21.846 }, 00:18:21.846 { 00:18:21.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.846 "dma_device_type": 2 00:18:21.846 }, 00:18:21.846 { 00:18:21.846 "dma_device_id": "system", 00:18:21.846 "dma_device_type": 1 00:18:21.846 }, 00:18:21.846 { 00:18:21.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.846 "dma_device_type": 2 00:18:21.846 } 00:18:21.846 ], 00:18:21.846 "driver_specific": { 00:18:21.846 "raid": { 00:18:21.846 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:21.846 "strip_size_kb": 0, 00:18:21.846 "state": "online", 00:18:21.846 "raid_level": "raid1", 00:18:21.846 "superblock": true, 00:18:21.846 "num_base_bdevs": 2, 00:18:21.846 "num_base_bdevs_discovered": 2, 00:18:21.846 "num_base_bdevs_operational": 2, 00:18:21.846 "base_bdevs_list": [ 00:18:21.846 { 00:18:21.846 "name": "pt1", 00:18:21.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.846 "is_configured": true, 00:18:21.846 "data_offset": 256, 00:18:21.846 "data_size": 7936 00:18:21.846 }, 00:18:21.846 { 00:18:21.846 "name": "pt2", 00:18:21.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.846 "is_configured": true, 00:18:21.846 "data_offset": 256, 
00:18:21.846 "data_size": 7936 00:18:21.846 } 00:18:21.846 ] 00:18:21.846 } 00:18:21.846 } 00:18:21.846 }' 00:18:21.846 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:22.106 pt2' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.106 [2024-11-17 01:38:30.455813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ae1d6e1a-2141-4d9d-ab1f-56233c8042a1 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z ae1d6e1a-2141-4d9d-ab1f-56233c8042a1 ']' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:22.106 01:38:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.106 [2024-11-17 01:38:30.503485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.106 [2024-11-17 01:38:30.503555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.106 [2024-11-17 01:38:30.503649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.106 [2024-11-17 01:38:30.503732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.106 [2024-11-17 01:38:30.503765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.106 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.368 [2024-11-17 01:38:30.647282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:22.368 [2024-11-17 01:38:30.649078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:22.368 [2024-11-17 01:38:30.649207] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:22.368 [2024-11-17 01:38:30.649310] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:22.368 [2024-11-17 01:38:30.649325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.368 [2024-11-17 01:38:30.649335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:18:22.368 request: 00:18:22.368 { 00:18:22.368 "name": "raid_bdev1", 00:18:22.368 "raid_level": "raid1", 00:18:22.368 "base_bdevs": [ 00:18:22.368 "malloc1", 00:18:22.368 "malloc2" 00:18:22.368 ], 00:18:22.368 "superblock": false, 00:18:22.368 "method": "bdev_raid_create", 00:18:22.368 "req_id": 1 00:18:22.368 } 00:18:22.368 Got JSON-RPC error response 00:18:22.368 response: 00:18:22.368 { 00:18:22.368 "code": -17, 00:18:22.368 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:22.368 } 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.368 [2024-11-17 01:38:30.715140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:22.368 [2024-11-17 01:38:30.715258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.368 [2024-11-17 01:38:30.715290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:22.368 [2024-11-17 01:38:30.715319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.368 [2024-11-17 01:38:30.717132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.368 [2024-11-17 01:38:30.717227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:22.368 [2024-11-17 01:38:30.717284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:22.368 [2024-11-17 01:38:30.717355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:22.368 pt1 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.368 "name": "raid_bdev1", 00:18:22.368 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:22.368 "strip_size_kb": 0, 00:18:22.368 "state": "configuring", 00:18:22.368 "raid_level": "raid1", 00:18:22.368 "superblock": true, 00:18:22.368 "num_base_bdevs": 2, 00:18:22.368 "num_base_bdevs_discovered": 1, 00:18:22.368 "num_base_bdevs_operational": 2, 00:18:22.368 "base_bdevs_list": [ 00:18:22.368 { 00:18:22.368 "name": "pt1", 00:18:22.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.368 "is_configured": true, 00:18:22.368 "data_offset": 256, 00:18:22.368 "data_size": 7936 00:18:22.368 }, 00:18:22.368 { 
00:18:22.368 "name": null, 00:18:22.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.368 "is_configured": false, 00:18:22.368 "data_offset": 256, 00:18:22.368 "data_size": 7936 00:18:22.368 } 00:18:22.368 ] 00:18:22.368 }' 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.368 01:38:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.939 [2024-11-17 01:38:31.194288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.939 [2024-11-17 01:38:31.194409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.939 [2024-11-17 01:38:31.194442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:22.939 [2024-11-17 01:38:31.194471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.939 [2024-11-17 01:38:31.194660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.939 [2024-11-17 01:38:31.194716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.939 [2024-11-17 01:38:31.194793] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:22.939 [2024-11-17 01:38:31.194841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.939 [2024-11-17 01:38:31.194978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:22.939 [2024-11-17 01:38:31.195016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:22.939 [2024-11-17 01:38:31.195094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:22.939 [2024-11-17 01:38:31.195244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:22.939 [2024-11-17 01:38:31.195282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:22.939 [2024-11-17 01:38:31.195408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.939 pt2 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.939 01:38:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.939 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.940 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.940 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.940 "name": "raid_bdev1", 00:18:22.940 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:22.940 "strip_size_kb": 0, 00:18:22.940 "state": "online", 00:18:22.940 "raid_level": "raid1", 00:18:22.940 "superblock": true, 00:18:22.940 "num_base_bdevs": 2, 00:18:22.940 "num_base_bdevs_discovered": 2, 00:18:22.940 "num_base_bdevs_operational": 2, 00:18:22.940 "base_bdevs_list": [ 00:18:22.940 { 00:18:22.940 "name": "pt1", 00:18:22.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.940 "is_configured": true, 00:18:22.940 "data_offset": 256, 00:18:22.940 "data_size": 7936 00:18:22.940 }, 00:18:22.940 { 00:18:22.940 "name": "pt2", 00:18:22.940 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:22.940 "is_configured": true, 00:18:22.940 "data_offset": 256, 00:18:22.940 "data_size": 7936 00:18:22.940 } 00:18:22.940 ] 00:18:22.940 }' 00:18:22.940 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.940 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.200 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.200 [2024-11-17 01:38:31.653756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:23.461 "name": "raid_bdev1", 00:18:23.461 
"aliases": [ 00:18:23.461 "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1" 00:18:23.461 ], 00:18:23.461 "product_name": "Raid Volume", 00:18:23.461 "block_size": 4096, 00:18:23.461 "num_blocks": 7936, 00:18:23.461 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:23.461 "md_size": 32, 00:18:23.461 "md_interleave": false, 00:18:23.461 "dif_type": 0, 00:18:23.461 "assigned_rate_limits": { 00:18:23.461 "rw_ios_per_sec": 0, 00:18:23.461 "rw_mbytes_per_sec": 0, 00:18:23.461 "r_mbytes_per_sec": 0, 00:18:23.461 "w_mbytes_per_sec": 0 00:18:23.461 }, 00:18:23.461 "claimed": false, 00:18:23.461 "zoned": false, 00:18:23.461 "supported_io_types": { 00:18:23.461 "read": true, 00:18:23.461 "write": true, 00:18:23.461 "unmap": false, 00:18:23.461 "flush": false, 00:18:23.461 "reset": true, 00:18:23.461 "nvme_admin": false, 00:18:23.461 "nvme_io": false, 00:18:23.461 "nvme_io_md": false, 00:18:23.461 "write_zeroes": true, 00:18:23.461 "zcopy": false, 00:18:23.461 "get_zone_info": false, 00:18:23.461 "zone_management": false, 00:18:23.461 "zone_append": false, 00:18:23.461 "compare": false, 00:18:23.461 "compare_and_write": false, 00:18:23.461 "abort": false, 00:18:23.461 "seek_hole": false, 00:18:23.461 "seek_data": false, 00:18:23.461 "copy": false, 00:18:23.461 "nvme_iov_md": false 00:18:23.461 }, 00:18:23.461 "memory_domains": [ 00:18:23.461 { 00:18:23.461 "dma_device_id": "system", 00:18:23.461 "dma_device_type": 1 00:18:23.461 }, 00:18:23.461 { 00:18:23.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.461 "dma_device_type": 2 00:18:23.461 }, 00:18:23.461 { 00:18:23.461 "dma_device_id": "system", 00:18:23.461 "dma_device_type": 1 00:18:23.461 }, 00:18:23.461 { 00:18:23.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.461 "dma_device_type": 2 00:18:23.461 } 00:18:23.461 ], 00:18:23.461 "driver_specific": { 00:18:23.461 "raid": { 00:18:23.461 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:23.461 "strip_size_kb": 0, 00:18:23.461 "state": "online", 00:18:23.461 
"raid_level": "raid1", 00:18:23.461 "superblock": true, 00:18:23.461 "num_base_bdevs": 2, 00:18:23.461 "num_base_bdevs_discovered": 2, 00:18:23.461 "num_base_bdevs_operational": 2, 00:18:23.461 "base_bdevs_list": [ 00:18:23.461 { 00:18:23.461 "name": "pt1", 00:18:23.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:23.461 "is_configured": true, 00:18:23.461 "data_offset": 256, 00:18:23.461 "data_size": 7936 00:18:23.461 }, 00:18:23.461 { 00:18:23.461 "name": "pt2", 00:18:23.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.461 "is_configured": true, 00:18:23.461 "data_offset": 256, 00:18:23.461 "data_size": 7936 00:18:23.461 } 00:18:23.461 ] 00:18:23.461 } 00:18:23.461 } 00:18:23.461 }' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:23.461 pt2' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.461 01:38:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.461 [2024-11-17 01:38:31.893349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' ae1d6e1a-2141-4d9d-ab1f-56233c8042a1 '!=' ae1d6e1a-2141-4d9d-ab1f-56233c8042a1 ']' 00:18:23.461 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.722 [2024-11-17 01:38:31.925090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.722 
01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.722 "name": "raid_bdev1", 00:18:23.722 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:23.722 "strip_size_kb": 0, 00:18:23.722 "state": "online", 00:18:23.722 "raid_level": "raid1", 00:18:23.722 "superblock": true, 00:18:23.722 "num_base_bdevs": 2, 00:18:23.722 "num_base_bdevs_discovered": 1, 00:18:23.722 "num_base_bdevs_operational": 1, 00:18:23.722 "base_bdevs_list": [ 00:18:23.722 { 00:18:23.722 "name": null, 00:18:23.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.722 "is_configured": false, 00:18:23.722 "data_offset": 0, 00:18:23.722 "data_size": 7936 00:18:23.722 }, 00:18:23.722 { 00:18:23.722 "name": "pt2", 00:18:23.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.722 "is_configured": true, 00:18:23.722 "data_offset": 256, 00:18:23.722 "data_size": 7936 00:18:23.722 } 
00:18:23.722 ] 00:18:23.722 }' 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.722 01:38:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.982 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:23.982 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.983 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.983 [2024-11-17 01:38:32.420234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.983 [2024-11-17 01:38:32.420301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.983 [2024-11-17 01:38:32.420370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.983 [2024-11-17 01:38:32.420420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.983 [2024-11-17 01:38:32.420452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:23.983 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.983 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.983 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:23.983 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.983 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.243 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.243 01:38:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:24.243 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:24.243 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:24.243 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:24.243 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.244 [2024-11-17 01:38:32.496120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:24.244 [2024-11-17 
01:38:32.496219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.244 [2024-11-17 01:38:32.496268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:24.244 [2024-11-17 01:38:32.496305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.244 [2024-11-17 01:38:32.498150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.244 [2024-11-17 01:38:32.498215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:24.244 [2024-11-17 01:38:32.498273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:24.244 [2024-11-17 01:38:32.498339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:24.244 pt2 00:18:24.244 [2024-11-17 01:38:32.498447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:24.244 [2024-11-17 01:38:32.498462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:24.244 [2024-11-17 01:38:32.498530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:24.244 [2024-11-17 01:38:32.498636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:24.244 [2024-11-17 01:38:32.498643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:24.244 [2024-11-17 01:38:32.498722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.244 "name": "raid_bdev1", 00:18:24.244 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:24.244 "strip_size_kb": 0, 00:18:24.244 "state": "online", 00:18:24.244 "raid_level": "raid1", 00:18:24.244 "superblock": true, 00:18:24.244 "num_base_bdevs": 2, 00:18:24.244 
"num_base_bdevs_discovered": 1, 00:18:24.244 "num_base_bdevs_operational": 1, 00:18:24.244 "base_bdevs_list": [ 00:18:24.244 { 00:18:24.244 "name": null, 00:18:24.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.244 "is_configured": false, 00:18:24.244 "data_offset": 256, 00:18:24.244 "data_size": 7936 00:18:24.244 }, 00:18:24.244 { 00:18:24.244 "name": "pt2", 00:18:24.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.244 "is_configured": true, 00:18:24.244 "data_offset": 256, 00:18:24.244 "data_size": 7936 00:18:24.244 } 00:18:24.244 ] 00:18:24.244 }' 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.244 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.504 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.505 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.505 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.505 [2024-11-17 01:38:32.943336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.505 [2024-11-17 01:38:32.943403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.505 [2024-11-17 01:38:32.943465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.505 [2024-11-17 01:38:32.943515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.505 [2024-11-17 01:38:32.943562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:24.505 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.505 01:38:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.505 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:24.505 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.505 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.765 [2024-11-17 01:38:32.995302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.765 [2024-11-17 01:38:32.995408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.765 [2024-11-17 01:38:32.995439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:24.765 [2024-11-17 01:38:32.995465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.765 [2024-11-17 01:38:32.997291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.765 [2024-11-17 01:38:32.997372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:18:24.765 [2024-11-17 01:38:32.997434] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:24.765 [2024-11-17 01:38:32.997493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:24.765 [2024-11-17 01:38:32.997627] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:24.765 [2024-11-17 01:38:32.997680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.765 [2024-11-17 01:38:32.997710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:24.765 [2024-11-17 01:38:32.997836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:24.765 [2024-11-17 01:38:32.997930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:24.765 [2024-11-17 01:38:32.997965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:24.765 [2024-11-17 01:38:32.998044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:24.765 [2024-11-17 01:38:32.998162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:24.765 [2024-11-17 01:38:32.998199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:24.765 [2024-11-17 01:38:32.998340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.765 pt1 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.765 01:38:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.765 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.765 "name": "raid_bdev1", 00:18:24.765 "uuid": "ae1d6e1a-2141-4d9d-ab1f-56233c8042a1", 00:18:24.765 "strip_size_kb": 0, 00:18:24.765 "state": "online", 00:18:24.765 "raid_level": "raid1", 
00:18:24.765 "superblock": true, 00:18:24.765 "num_base_bdevs": 2, 00:18:24.766 "num_base_bdevs_discovered": 1, 00:18:24.766 "num_base_bdevs_operational": 1, 00:18:24.766 "base_bdevs_list": [ 00:18:24.766 { 00:18:24.766 "name": null, 00:18:24.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.766 "is_configured": false, 00:18:24.766 "data_offset": 256, 00:18:24.766 "data_size": 7936 00:18:24.766 }, 00:18:24.766 { 00:18:24.766 "name": "pt2", 00:18:24.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.766 "is_configured": true, 00:18:24.766 "data_offset": 256, 00:18:24.766 "data_size": 7936 00:18:24.766 } 00:18:24.766 ] 00:18:24.766 }' 00:18:24.766 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.766 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.026 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:25.026 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.026 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.026 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.286 01:38:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:25.286 [2024-11-17 01:38:33.526638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' ae1d6e1a-2141-4d9d-ab1f-56233c8042a1 '!=' ae1d6e1a-2141-4d9d-ab1f-56233c8042a1 ']' 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87174 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87174 ']' 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87174 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87174 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.286 killing process with pid 87174 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87174' 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87174 00:18:25.286 [2024-11-17 01:38:33.605149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.286 [2024-11-17 01:38:33.605211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:25.286 [2024-11-17 01:38:33.605245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.286 [2024-11-17 01:38:33.605262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:25.286 01:38:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87174 00:18:25.546 [2024-11-17 01:38:33.812447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.487 01:38:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:26.487 00:18:26.487 real 0m6.075s 00:18:26.487 user 0m9.248s 00:18:26.487 sys 0m1.150s 00:18:26.487 01:38:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.487 01:38:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.487 ************************************ 00:18:26.487 END TEST raid_superblock_test_md_separate 00:18:26.487 ************************************ 00:18:26.487 01:38:34 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:26.487 01:38:34 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:26.487 01:38:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:26.487 01:38:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.487 01:38:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.487 ************************************ 00:18:26.487 START TEST raid_rebuild_test_sb_md_separate 00:18:26.487 ************************************ 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87503 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87503 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87503 ']' 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.487 01:38:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.747 [2024-11-17 01:38:35.012197] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:26.747 [2024-11-17 01:38:35.012414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:26.747 Zero copy mechanism will not be used. 00:18:26.747 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87503 ] 00:18:26.747 [2024-11-17 01:38:35.185535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.007 [2024-11-17 01:38:35.287754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.267 [2024-11-17 01:38:35.484871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.267 [2024-11-17 01:38:35.484952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.527 BaseBdev1_malloc 
00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.527 [2024-11-17 01:38:35.864987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:27.527 [2024-11-17 01:38:35.865127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.527 [2024-11-17 01:38:35.865166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:27.527 [2024-11-17 01:38:35.865194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.527 [2024-11-17 01:38:35.866979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.527 [2024-11-17 01:38:35.867064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:27.527 BaseBdev1 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.527 BaseBdev2_malloc 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.527 [2024-11-17 01:38:35.920694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:27.527 [2024-11-17 01:38:35.920830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.527 [2024-11-17 01:38:35.920867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:27.527 [2024-11-17 01:38:35.920920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.527 [2024-11-17 01:38:35.922687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.527 [2024-11-17 01:38:35.922763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:27.527 BaseBdev2 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.527 spare_malloc 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.527 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.788 spare_delay 00:18:27.788 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.788 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:27.788 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.788 01:38:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.788 [2024-11-17 01:38:35.999291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:27.788 [2024-11-17 01:38:35.999400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.788 [2024-11-17 01:38:35.999447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:27.788 [2024-11-17 01:38:35.999480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.788 [2024-11-17 01:38:36.001297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.788 [2024-11-17 01:38:36.001368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:27.788 spare 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.788 [2024-11-17 01:38:36.011325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.788 [2024-11-17 01:38:36.013093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.788 [2024-11-17 01:38:36.013303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:27.788 [2024-11-17 01:38:36.013339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:27.788 [2024-11-17 01:38:36.013444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:27.788 [2024-11-17 01:38:36.013596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:27.788 [2024-11-17 01:38:36.013632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:27.788 [2024-11-17 01:38:36.013798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.788 01:38:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.788 "name": "raid_bdev1", 00:18:27.788 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:27.788 "strip_size_kb": 0, 00:18:27.788 "state": "online", 00:18:27.788 "raid_level": "raid1", 00:18:27.788 "superblock": true, 00:18:27.788 "num_base_bdevs": 2, 00:18:27.788 "num_base_bdevs_discovered": 2, 00:18:27.788 "num_base_bdevs_operational": 2, 00:18:27.788 "base_bdevs_list": [ 00:18:27.788 { 00:18:27.788 "name": "BaseBdev1", 00:18:27.788 "uuid": "e2adf9f0-3e2a-5d32-9e1d-da6b66aa1390", 00:18:27.788 "is_configured": true, 00:18:27.788 "data_offset": 256, 00:18:27.788 "data_size": 7936 00:18:27.788 }, 00:18:27.788 { 00:18:27.788 "name": "BaseBdev2", 00:18:27.788 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:27.788 "is_configured": true, 00:18:27.788 "data_offset": 256, 00:18:27.788 "data_size": 7936 
00:18:27.788 } 00:18:27.788 ] 00:18:27.788 }' 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.788 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.048 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.048 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:28.048 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.048 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.048 [2024-11-17 01:38:36.494809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:28.308 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:28.308 [2024-11-17 01:38:36.734206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:28.308 /dev/nbd0 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:28.568 1+0 records in 00:18:28.568 1+0 records out 00:18:28.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421693 s, 9.7 MB/s 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:28.568 01:38:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:28.568 01:38:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:29.141 7936+0 records in 00:18:29.141 7936+0 records out 00:18:29.141 32505856 bytes (33 MB, 31 MiB) copied, 0.634667 s, 51.2 MB/s 00:18:29.141 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:29.141 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.141 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:29.141 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:29.141 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:29.141 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.141 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:29.402 [2024-11-17 01:38:37.649994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.402 01:38:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.402 [2024-11-17 01:38:37.666654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.402 "name": "raid_bdev1", 00:18:29.402 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:29.402 "strip_size_kb": 0, 00:18:29.402 "state": "online", 00:18:29.402 "raid_level": "raid1", 00:18:29.402 "superblock": true, 00:18:29.402 "num_base_bdevs": 2, 00:18:29.402 "num_base_bdevs_discovered": 1, 00:18:29.402 "num_base_bdevs_operational": 1, 00:18:29.402 "base_bdevs_list": [ 00:18:29.402 { 00:18:29.402 "name": null, 00:18:29.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.402 "is_configured": false, 00:18:29.402 "data_offset": 0, 00:18:29.402 "data_size": 7936 00:18:29.402 }, 00:18:29.402 { 00:18:29.402 "name": "BaseBdev2", 00:18:29.402 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:29.402 "is_configured": true, 00:18:29.402 "data_offset": 256, 00:18:29.402 "data_size": 7936 00:18:29.402 } 00:18:29.402 ] 00:18:29.402 }' 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.402 01:38:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.973 01:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:29.973 01:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.973 01:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.973 [2024-11-17 01:38:38.157786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.973 [2024-11-17 01:38:38.171708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:29.973 01:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.973 01:38:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:29.973 [2024-11-17 01:38:38.173535] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.912 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.912 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.912 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.912 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.912 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.913 "name": "raid_bdev1", 00:18:30.913 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:30.913 "strip_size_kb": 0, 00:18:30.913 "state": "online", 00:18:30.913 "raid_level": "raid1", 00:18:30.913 "superblock": true, 00:18:30.913 "num_base_bdevs": 2, 00:18:30.913 "num_base_bdevs_discovered": 2, 00:18:30.913 "num_base_bdevs_operational": 2, 00:18:30.913 "process": { 00:18:30.913 "type": "rebuild", 00:18:30.913 "target": "spare", 00:18:30.913 "progress": { 00:18:30.913 "blocks": 2560, 00:18:30.913 "percent": 32 00:18:30.913 } 00:18:30.913 }, 00:18:30.913 "base_bdevs_list": [ 00:18:30.913 { 00:18:30.913 "name": "spare", 00:18:30.913 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0", 00:18:30.913 "is_configured": true, 00:18:30.913 "data_offset": 256, 00:18:30.913 "data_size": 7936 00:18:30.913 }, 00:18:30.913 { 00:18:30.913 "name": "BaseBdev2", 00:18:30.913 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:30.913 "is_configured": true, 00:18:30.913 "data_offset": 256, 00:18:30.913 "data_size": 7936 00:18:30.913 } 00:18:30.913 ] 00:18:30.913 }' 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.913 01:38:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.913 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.913 [2024-11-17 01:38:39.337156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.172 [2024-11-17 01:38:39.378114] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:31.172 [2024-11-17 01:38:39.378216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.172 [2024-11-17 01:38:39.378248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:31.172 [2024-11-17 01:38:39.378270] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.172 01:38:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.172 "name": "raid_bdev1", 00:18:31.172 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:31.172 "strip_size_kb": 0, 00:18:31.172 "state": "online", 00:18:31.172 "raid_level": "raid1", 00:18:31.172 "superblock": true, 00:18:31.172 "num_base_bdevs": 2, 00:18:31.172 "num_base_bdevs_discovered": 1, 00:18:31.172 "num_base_bdevs_operational": 1, 00:18:31.172 "base_bdevs_list": [ 00:18:31.172 { 00:18:31.172 "name": null, 00:18:31.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.172 "is_configured": false, 00:18:31.172 "data_offset": 0, 00:18:31.172 "data_size": 7936 00:18:31.172 }, 00:18:31.172 { 00:18:31.172 "name": "BaseBdev2", 00:18:31.172 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:31.172 "is_configured": true, 00:18:31.172 "data_offset": 256, 00:18:31.172 "data_size": 7936 00:18:31.172 } 00:18:31.172 ] 00:18:31.172 }' 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.172 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.432 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.433 "name": "raid_bdev1", 00:18:31.433 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:31.433 "strip_size_kb": 0, 00:18:31.433 "state": "online", 00:18:31.433 "raid_level": "raid1", 00:18:31.433 "superblock": true, 00:18:31.433 "num_base_bdevs": 2, 00:18:31.433 "num_base_bdevs_discovered": 1, 00:18:31.433 "num_base_bdevs_operational": 1, 00:18:31.433 "base_bdevs_list": [ 00:18:31.433 { 00:18:31.433 "name": null, 00:18:31.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.433 
"is_configured": false, 00:18:31.433 "data_offset": 0, 00:18:31.433 "data_size": 7936 00:18:31.433 }, 00:18:31.433 { 00:18:31.433 "name": "BaseBdev2", 00:18:31.433 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:31.433 "is_configured": true, 00:18:31.433 "data_offset": 256, 00:18:31.433 "data_size": 7936 00:18:31.433 } 00:18:31.433 ] 00:18:31.433 }' 00:18:31.433 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.692 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.692 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.692 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.692 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:31.692 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.692 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.692 [2024-11-17 01:38:39.975851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.692 [2024-11-17 01:38:39.989204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:31.692 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.692 01:38:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:31.692 [2024-11-17 01:38:39.990995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.631 01:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.631 01:38:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.631 01:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.631 01:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.631 01:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.631 01:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.631 01:38:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.631 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.631 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.631 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.631 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.631 "name": "raid_bdev1", 00:18:32.631 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:32.631 "strip_size_kb": 0, 00:18:32.631 "state": "online", 00:18:32.631 "raid_level": "raid1", 00:18:32.631 "superblock": true, 00:18:32.631 "num_base_bdevs": 2, 00:18:32.631 "num_base_bdevs_discovered": 2, 00:18:32.631 "num_base_bdevs_operational": 2, 00:18:32.631 "process": { 00:18:32.631 "type": "rebuild", 00:18:32.631 "target": "spare", 00:18:32.631 "progress": { 00:18:32.631 "blocks": 2560, 00:18:32.631 "percent": 32 00:18:32.631 } 00:18:32.631 }, 00:18:32.631 "base_bdevs_list": [ 00:18:32.631 { 00:18:32.631 "name": "spare", 00:18:32.631 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0", 00:18:32.631 "is_configured": true, 00:18:32.631 "data_offset": 256, 00:18:32.631 "data_size": 7936 00:18:32.631 }, 
00:18:32.631 { 00:18:32.631 "name": "BaseBdev2", 00:18:32.631 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:32.631 "is_configured": true, 00:18:32.631 "data_offset": 256, 00:18:32.631 "data_size": 7936 00:18:32.631 } 00:18:32.631 ] 00:18:32.631 }' 00:18:32.631 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:32.892 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=695 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.892 01:38:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.892 "name": "raid_bdev1", 00:18:32.892 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:32.892 "strip_size_kb": 0, 00:18:32.892 "state": "online", 00:18:32.892 "raid_level": "raid1", 00:18:32.892 "superblock": true, 00:18:32.892 "num_base_bdevs": 2, 00:18:32.892 "num_base_bdevs_discovered": 2, 00:18:32.892 "num_base_bdevs_operational": 2, 00:18:32.892 "process": { 00:18:32.892 "type": "rebuild", 00:18:32.892 "target": "spare", 00:18:32.892 "progress": { 00:18:32.892 "blocks": 2816, 00:18:32.892 "percent": 35 00:18:32.892 } 00:18:32.892 }, 00:18:32.892 "base_bdevs_list": [ 00:18:32.892 { 00:18:32.892 "name": "spare", 00:18:32.892 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0", 00:18:32.892 "is_configured": true, 00:18:32.892 "data_offset": 256, 00:18:32.892 "data_size": 7936 00:18:32.892 }, 00:18:32.892 { 00:18:32.892 "name": "BaseBdev2", 00:18:32.892 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:32.892 
"is_configured": true, 00:18:32.892 "data_offset": 256, 00:18:32.892 "data_size": 7936 00:18:32.892 } 00:18:32.892 ] 00:18:32.892 }' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.892 01:38:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.833 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.094 01:38:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.094 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.094 "name": "raid_bdev1", 00:18:34.094 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:34.094 "strip_size_kb": 0, 00:18:34.094 "state": "online", 00:18:34.094 "raid_level": "raid1", 00:18:34.094 "superblock": true, 00:18:34.094 "num_base_bdevs": 2, 00:18:34.094 "num_base_bdevs_discovered": 2, 00:18:34.094 "num_base_bdevs_operational": 2, 00:18:34.094 "process": { 00:18:34.094 "type": "rebuild", 00:18:34.094 "target": "spare", 00:18:34.094 "progress": { 00:18:34.094 "blocks": 5632, 00:18:34.094 "percent": 70 00:18:34.094 } 00:18:34.094 }, 00:18:34.094 "base_bdevs_list": [ 00:18:34.094 { 00:18:34.094 "name": "spare", 00:18:34.094 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0", 00:18:34.094 "is_configured": true, 00:18:34.094 "data_offset": 256, 00:18:34.094 "data_size": 7936 00:18:34.094 }, 00:18:34.094 { 00:18:34.094 "name": "BaseBdev2", 00:18:34.094 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:34.094 "is_configured": true, 00:18:34.094 "data_offset": 256, 00:18:34.094 "data_size": 7936 00:18:34.094 } 00:18:34.094 ] 00:18:34.094 }' 00:18:34.094 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.094 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.094 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.094 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.094 01:38:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.663 [2024-11-17 01:38:43.102115] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1
00:18:34.663 [2024-11-17 01:38:43.102191] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:18:34.663 [2024-11-17 01:38:43.102295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:35.232 "name": "raid_bdev1",
00:18:35.232 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c",
00:18:35.232 "strip_size_kb": 0,
00:18:35.232 "state": "online",
00:18:35.232 "raid_level": "raid1",
00:18:35.232 "superblock": true,
00:18:35.232 "num_base_bdevs": 2,
00:18:35.232 "num_base_bdevs_discovered": 2,
00:18:35.232 "num_base_bdevs_operational": 2,
00:18:35.232 "base_bdevs_list": [
00:18:35.232 {
00:18:35.232 "name": "spare",
00:18:35.232 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0",
00:18:35.232 "is_configured": true,
00:18:35.232 "data_offset": 256,
00:18:35.232 "data_size": 7936
00:18:35.232 },
00:18:35.232 {
00:18:35.232 "name": "BaseBdev2",
00:18:35.232 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139",
00:18:35.232 "is_configured": true,
00:18:35.232 "data_offset": 256,
00:18:35.232 "data_size": 7936
00:18:35.232 }
00:18:35.232 ]
00:18:35.232 }'
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.232 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:35.233 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.233 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:35.233 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.233 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:35.233 "name": "raid_bdev1",
00:18:35.233 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c",
00:18:35.233 "strip_size_kb": 0,
00:18:35.233 "state": "online",
00:18:35.233 "raid_level": "raid1",
00:18:35.233 "superblock": true,
00:18:35.233 "num_base_bdevs": 2,
00:18:35.233 "num_base_bdevs_discovered": 2,
00:18:35.233 "num_base_bdevs_operational": 2,
00:18:35.233 "base_bdevs_list": [
00:18:35.233 {
00:18:35.233 "name": "spare",
00:18:35.233 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0",
00:18:35.233 "is_configured": true,
00:18:35.233 "data_offset": 256,
00:18:35.233 "data_size": 7936
00:18:35.233 },
00:18:35.233 {
00:18:35.233 "name": "BaseBdev2",
00:18:35.233 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139",
00:18:35.233 "is_configured": true,
00:18:35.233 "data_offset": 256,
00:18:35.233 "data_size": 7936
00:18:35.233 }
00:18:35.233 ]
00:18:35.233 }'
00:18:35.233 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:35.233 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:35.233 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:35.493 "name": "raid_bdev1",
00:18:35.493 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c",
00:18:35.493 "strip_size_kb": 0,
00:18:35.493 "state": "online",
00:18:35.493 "raid_level": "raid1",
00:18:35.493 "superblock": true,
00:18:35.493 "num_base_bdevs": 2,
00:18:35.493 "num_base_bdevs_discovered": 2,
00:18:35.493 "num_base_bdevs_operational": 2,
00:18:35.493 "base_bdevs_list": [
00:18:35.493 {
00:18:35.493 "name": "spare",
00:18:35.493 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0",
00:18:35.493 "is_configured": true,
00:18:35.493 "data_offset": 256,
00:18:35.493 "data_size": 7936
00:18:35.493 },
00:18:35.493 {
00:18:35.493 "name": "BaseBdev2",
00:18:35.493 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139",
00:18:35.493 "is_configured": true,
00:18:35.493 "data_offset": 256,
00:18:35.493 "data_size": 7936
00:18:35.493 }
00:18:35.493 ]
00:18:35.493 }'
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:35.493 01:38:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:35.753 [2024-11-17 01:38:44.155342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:35.753 [2024-11-17 01:38:44.155435] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:35.753 [2024-11-17 01:38:44.155522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:35.753 [2024-11-17 01:38:44.155612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:35.753 [2024-11-17 01:38:44.155657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:18:35.753 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:35.754 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:18:36.014 /dev/nbd0
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:36.014 1+0 records in
00:18:36.014 1+0 records out
00:18:36.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328963 s, 12.5 MB/s
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:36.014 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:18:36.273 /dev/nbd1
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:36.273 1+0 records in
00:18:36.273 1+0 records out
00:18:36.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395965 s, 10.3 MB/s
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:18:36.273 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:18:36.549 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:18:36.549 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:18:36.549 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:18:36.549 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:36.549 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:18:36.549 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:36.549 01:38:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:36.852 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.131 [2024-11-17 01:38:45.338389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:37.131 [2024-11-17 01:38:45.338446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:37.131 [2024-11-17 01:38:45.338467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:18:37.131 [2024-11-17 01:38:45.338475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:37.131 [2024-11-17 01:38:45.340385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:37.131 [2024-11-17 01:38:45.340428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:37.131 [2024-11-17 01:38:45.340490] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:18:37.131 [2024-11-17 01:38:45.340545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:37.131 [2024-11-17 01:38:45.340667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:37.131 spare
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.131 [2024-11-17 01:38:45.440545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:18:37.131 [2024-11-17 01:38:45.440627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:18:37.131 [2024-11-17 01:38:45.440752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50
00:18:37.131 [2024-11-17 01:38:45.440942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:18:37.131 [2024-11-17 01:38:45.440981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:18:37.131 [2024-11-17 01:38:45.441150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:37.131 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:37.132 "name": "raid_bdev1",
00:18:37.132 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c",
00:18:37.132 "strip_size_kb": 0,
00:18:37.132 "state": "online",
00:18:37.132 "raid_level": "raid1",
00:18:37.132 "superblock": true,
00:18:37.132 "num_base_bdevs": 2,
00:18:37.132 "num_base_bdevs_discovered": 2,
00:18:37.132 "num_base_bdevs_operational": 2,
00:18:37.132 "base_bdevs_list": [
00:18:37.132 {
00:18:37.132 "name": "spare",
00:18:37.132 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0",
00:18:37.132 "is_configured": true,
00:18:37.132 "data_offset": 256,
00:18:37.132 "data_size": 7936
00:18:37.132 },
00:18:37.132 {
00:18:37.132 "name": "BaseBdev2",
00:18:37.132 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139",
00:18:37.132 "is_configured": true,
00:18:37.132 "data_offset": 256,
00:18:37.132 "data_size": 7936
00:18:37.132 }
00:18:37.132 ]
00:18:37.132 }'
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:37.132 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.702 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:37.702 "name": "raid_bdev1",
00:18:37.702 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c",
00:18:37.702 "strip_size_kb": 0,
00:18:37.702 "state": "online",
00:18:37.702 "raid_level": "raid1",
00:18:37.702 "superblock": true,
00:18:37.702 "num_base_bdevs": 2,
00:18:37.702 "num_base_bdevs_discovered": 2,
00:18:37.702 "num_base_bdevs_operational": 2,
00:18:37.702 "base_bdevs_list": [
00:18:37.702 {
00:18:37.702 "name": "spare",
00:18:37.702 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0",
00:18:37.702 "is_configured": true,
00:18:37.702 "data_offset": 256,
00:18:37.702 "data_size": 7936
00:18:37.702 },
00:18:37.702 {
00:18:37.703 "name": "BaseBdev2",
00:18:37.703 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139",
00:18:37.703 "is_configured": true,
00:18:37.703 "data_offset": 256,
00:18:37.703 "data_size": 7936
00:18:37.703 }
00:18:37.703 ]
00:18:37.703 }'
00:18:37.703 01:38:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.703 [2024-11-17 01:38:46.109111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:37.703 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.963 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:37.963 "name": "raid_bdev1",
00:18:37.963 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c",
00:18:37.963 "strip_size_kb": 0,
00:18:37.963 "state": "online",
00:18:37.963 "raid_level": "raid1",
00:18:37.963 "superblock": true,
00:18:37.963 "num_base_bdevs": 2,
00:18:37.963 "num_base_bdevs_discovered": 1,
00:18:37.963 "num_base_bdevs_operational": 1,
00:18:37.963 "base_bdevs_list": [
00:18:37.963 {
00:18:37.963 "name": null,
00:18:37.963 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:37.963 "is_configured": false,
00:18:37.963 "data_offset": 0,
00:18:37.963 "data_size": 7936
00:18:37.963 },
00:18:37.963 {
00:18:37.963 "name": "BaseBdev2",
00:18:37.963 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139",
00:18:37.963 "is_configured": true,
00:18:37.963 "data_offset": 256,
00:18:37.963 "data_size": 7936
00:18:37.963 }
00:18:37.963 ]
00:18:37.963 }'
00:18:37.963 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:37.963 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:38.223 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:38.223 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:38.223 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:38.223 [2024-11-17 01:38:46.576321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:38.223 [2024-11-17 01:38:46.576538] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:18:38.223 [2024-11-17 01:38:46.576616] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:18:38.223 [2024-11-17 01:38:46.576716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:38.223 [2024-11-17 01:38:46.589887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20
00:18:38.223 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.223 01:38:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1
00:18:38.223 [2024-11-17 01:38:46.591695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:39.164 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:39.424 "name": "raid_bdev1",
00:18:39.424 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c",
00:18:39.424 "strip_size_kb": 0,
00:18:39.424 "state": "online",
00:18:39.424 "raid_level": "raid1",
00:18:39.424 "superblock": true,
00:18:39.424 "num_base_bdevs": 2,
00:18:39.424 "num_base_bdevs_discovered": 2,
00:18:39.424 "num_base_bdevs_operational": 2,
00:18:39.424 "process": {
00:18:39.424 "type": "rebuild",
00:18:39.424 "target": "spare",
00:18:39.424 "progress": {
00:18:39.424 "blocks": 2560,
00:18:39.424 "percent": 32
00:18:39.424 }
00:18:39.424 },
00:18:39.424 "base_bdevs_list": [
00:18:39.424 {
00:18:39.424 "name": "spare",
00:18:39.424 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0",
00:18:39.424 "is_configured": true,
00:18:39.424 "data_offset": 256,
00:18:39.424 "data_size": 7936
00:18:39.424 },
00:18:39.424 {
00:18:39.424 "name": "BaseBdev2",
00:18:39.424 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139",
00:18:39.424 "is_configured": true,
00:18:39.424 "data_offset": 256,
00:18:39.424 "data_size": 7936
00:18:39.424 }
00:18:39.424 ]
00:18:39.424 }'
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:39.424 [2024-11-17 01:38:47.755963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:39.424 [2024-11-17 01:38:47.796324] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:39.424 [2024-11-17 01:38:47.796458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:39.424 [2024-11-17 01:38:47.796492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:39.424 [2024-11-17 01:38:47.796528] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.424 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.425 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.425 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.425 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.425 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.425 "name": "raid_bdev1", 00:18:39.425 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:39.425 "strip_size_kb": 0, 00:18:39.425 "state": "online", 00:18:39.425 "raid_level": "raid1", 00:18:39.425 "superblock": true, 00:18:39.425 "num_base_bdevs": 2, 00:18:39.425 "num_base_bdevs_discovered": 1, 00:18:39.425 "num_base_bdevs_operational": 1, 00:18:39.425 "base_bdevs_list": [ 00:18:39.425 { 00:18:39.425 "name": null, 00:18:39.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.425 
"is_configured": false, 00:18:39.425 "data_offset": 0, 00:18:39.425 "data_size": 7936 00:18:39.425 }, 00:18:39.425 { 00:18:39.425 "name": "BaseBdev2", 00:18:39.425 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:39.425 "is_configured": true, 00:18:39.425 "data_offset": 256, 00:18:39.425 "data_size": 7936 00:18:39.425 } 00:18:39.425 ] 00:18:39.425 }' 00:18:39.425 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.425 01:38:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 01:38:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:39.995 01:38:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.995 01:38:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 [2024-11-17 01:38:48.266788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:39.995 [2024-11-17 01:38:48.266906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.995 [2024-11-17 01:38:48.266948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:39.995 [2024-11-17 01:38:48.266978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.995 [2024-11-17 01:38:48.267223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.995 [2024-11-17 01:38:48.267282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:39.995 [2024-11-17 01:38:48.267355] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:39.995 [2024-11-17 01:38:48.267395] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:39.995 [2024-11-17 01:38:48.267433] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:39.995 [2024-11-17 01:38:48.267471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.995 [2024-11-17 01:38:48.280868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:39.995 spare 00:18:39.995 01:38:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.995 [2024-11-17 01:38:48.282635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:39.995 01:38:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:40.953 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.953 "name": "raid_bdev1", 00:18:40.953 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:40.953 "strip_size_kb": 0, 00:18:40.953 "state": "online", 00:18:40.953 "raid_level": "raid1", 00:18:40.953 "superblock": true, 00:18:40.953 "num_base_bdevs": 2, 00:18:40.953 "num_base_bdevs_discovered": 2, 00:18:40.953 "num_base_bdevs_operational": 2, 00:18:40.953 "process": { 00:18:40.953 "type": "rebuild", 00:18:40.953 "target": "spare", 00:18:40.953 "progress": { 00:18:40.953 "blocks": 2560, 00:18:40.953 "percent": 32 00:18:40.953 } 00:18:40.953 }, 00:18:40.953 "base_bdevs_list": [ 00:18:40.953 { 00:18:40.953 "name": "spare", 00:18:40.953 "uuid": "1a3010e6-824d-5462-9f7e-993b2849d1b0", 00:18:40.953 "is_configured": true, 00:18:40.953 "data_offset": 256, 00:18:40.953 "data_size": 7936 00:18:40.953 }, 00:18:40.953 { 00:18:40.953 "name": "BaseBdev2", 00:18:40.953 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:40.953 "is_configured": true, 00:18:40.953 "data_offset": 256, 00:18:40.954 "data_size": 7936 00:18:40.954 } 00:18:40.954 ] 00:18:40.954 }' 00:18:40.954 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.954 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.954 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.213 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.213 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:41.213 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.213 01:38:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.213 [2024-11-17 01:38:49.446711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.213 [2024-11-17 01:38:49.487079] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:41.213 [2024-11-17 01:38:49.487200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.213 [2024-11-17 01:38:49.487238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.213 [2024-11-17 01:38:49.487258] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:41.213 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.213 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.214 01:38:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.214 "name": "raid_bdev1", 00:18:41.214 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:41.214 "strip_size_kb": 0, 00:18:41.214 "state": "online", 00:18:41.214 "raid_level": "raid1", 00:18:41.214 "superblock": true, 00:18:41.214 "num_base_bdevs": 2, 00:18:41.214 "num_base_bdevs_discovered": 1, 00:18:41.214 "num_base_bdevs_operational": 1, 00:18:41.214 "base_bdevs_list": [ 00:18:41.214 { 00:18:41.214 "name": null, 00:18:41.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.214 "is_configured": false, 00:18:41.214 "data_offset": 0, 00:18:41.214 "data_size": 7936 00:18:41.214 }, 00:18:41.214 { 00:18:41.214 "name": "BaseBdev2", 00:18:41.214 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:41.214 "is_configured": true, 00:18:41.214 "data_offset": 256, 00:18:41.214 "data_size": 7936 00:18:41.214 } 00:18:41.214 ] 00:18:41.214 }' 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.214 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.474 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.733 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.733 "name": "raid_bdev1", 00:18:41.733 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:41.733 "strip_size_kb": 0, 00:18:41.733 "state": "online", 00:18:41.733 "raid_level": "raid1", 00:18:41.733 "superblock": true, 00:18:41.733 "num_base_bdevs": 2, 00:18:41.733 "num_base_bdevs_discovered": 1, 00:18:41.733 "num_base_bdevs_operational": 1, 00:18:41.733 "base_bdevs_list": [ 00:18:41.733 { 00:18:41.733 "name": null, 00:18:41.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.733 "is_configured": false, 00:18:41.733 "data_offset": 0, 00:18:41.733 "data_size": 7936 00:18:41.733 }, 00:18:41.733 { 00:18:41.733 "name": "BaseBdev2", 00:18:41.733 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:41.733 "is_configured": true, 
00:18:41.733 "data_offset": 256, 00:18:41.733 "data_size": 7936 00:18:41.733 } 00:18:41.733 ] 00:18:41.733 }' 00:18:41.733 01:38:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.733 [2024-11-17 01:38:50.072352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:41.733 [2024-11-17 01:38:50.072463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.733 [2024-11-17 01:38:50.072501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:41.733 [2024-11-17 01:38:50.072527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.733 [2024-11-17 01:38:50.072738] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.733 [2024-11-17 01:38:50.072795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:41.733 [2024-11-17 01:38:50.072868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:41.733 [2024-11-17 01:38:50.072904] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:41.733 [2024-11-17 01:38:50.072944] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:41.733 [2024-11-17 01:38:50.072991] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:41.733 BaseBdev1 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.733 01:38:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.673 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.934 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.934 "name": "raid_bdev1", 00:18:42.934 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:42.934 "strip_size_kb": 0, 00:18:42.934 "state": "online", 00:18:42.934 "raid_level": "raid1", 00:18:42.934 "superblock": true, 00:18:42.934 "num_base_bdevs": 2, 00:18:42.934 "num_base_bdevs_discovered": 1, 00:18:42.934 "num_base_bdevs_operational": 1, 00:18:42.934 "base_bdevs_list": [ 00:18:42.934 { 00:18:42.934 "name": null, 00:18:42.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.934 "is_configured": false, 00:18:42.934 "data_offset": 0, 00:18:42.934 "data_size": 7936 00:18:42.934 }, 00:18:42.934 { 00:18:42.934 "name": "BaseBdev2", 00:18:42.934 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:42.934 "is_configured": true, 00:18:42.934 "data_offset": 256, 00:18:42.934 "data_size": 7936 00:18:42.934 } 00:18:42.934 ] 00:18:42.934 }' 00:18:42.934 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.934 01:38:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.194 "name": "raid_bdev1", 00:18:43.194 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:43.194 "strip_size_kb": 0, 00:18:43.194 "state": "online", 00:18:43.194 "raid_level": "raid1", 00:18:43.194 "superblock": true, 00:18:43.194 "num_base_bdevs": 2, 00:18:43.194 "num_base_bdevs_discovered": 1, 00:18:43.194 "num_base_bdevs_operational": 1, 00:18:43.194 "base_bdevs_list": [ 00:18:43.194 { 00:18:43.194 "name": null, 00:18:43.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.194 "is_configured": false, 00:18:43.194 "data_offset": 0, 00:18:43.194 
"data_size": 7936 00:18:43.194 }, 00:18:43.194 { 00:18:43.194 "name": "BaseBdev2", 00:18:43.194 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:43.194 "is_configured": true, 00:18:43.194 "data_offset": 256, 00:18:43.194 "data_size": 7936 00:18:43.194 } 00:18:43.194 ] 00:18:43.194 }' 00:18:43.194 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.454 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:43.455 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:43.455 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.455 [2024-11-17 01:38:51.721523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.455 [2024-11-17 01:38:51.721687] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:43.455 [2024-11-17 01:38:51.721736] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:43.455 request: 00:18:43.455 { 00:18:43.455 "base_bdev": "BaseBdev1", 00:18:43.455 "raid_bdev": "raid_bdev1", 00:18:43.455 "method": "bdev_raid_add_base_bdev", 00:18:43.455 "req_id": 1 00:18:43.455 } 00:18:43.455 Got JSON-RPC error response 00:18:43.455 response: 00:18:43.455 { 00:18:43.455 "code": -22, 00:18:43.455 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:43.455 } 00:18:43.455 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:43.455 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:43.455 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.455 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.455 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.455 01:38:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:44.394 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.394 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.395 "name": "raid_bdev1", 00:18:44.395 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:44.395 "strip_size_kb": 0, 00:18:44.395 "state": "online", 00:18:44.395 "raid_level": "raid1", 00:18:44.395 "superblock": true, 00:18:44.395 "num_base_bdevs": 2, 00:18:44.395 "num_base_bdevs_discovered": 1, 00:18:44.395 "num_base_bdevs_operational": 1, 00:18:44.395 "base_bdevs_list": [ 
00:18:44.395 { 00:18:44.395 "name": null, 00:18:44.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.395 "is_configured": false, 00:18:44.395 "data_offset": 0, 00:18:44.395 "data_size": 7936 00:18:44.395 }, 00:18:44.395 { 00:18:44.395 "name": "BaseBdev2", 00:18:44.395 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:44.395 "is_configured": true, 00:18:44.395 "data_offset": 256, 00:18:44.395 "data_size": 7936 00:18:44.395 } 00:18:44.395 ] 00:18:44.395 }' 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.395 01:38:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.965 "name": "raid_bdev1", 00:18:44.965 "uuid": "6f8af301-fd58-4530-a876-40d601b5b98c", 00:18:44.965 "strip_size_kb": 0, 00:18:44.965 "state": "online", 00:18:44.965 "raid_level": "raid1", 00:18:44.965 "superblock": true, 00:18:44.965 "num_base_bdevs": 2, 00:18:44.965 "num_base_bdevs_discovered": 1, 00:18:44.965 "num_base_bdevs_operational": 1, 00:18:44.965 "base_bdevs_list": [ 00:18:44.965 { 00:18:44.965 "name": null, 00:18:44.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.965 "is_configured": false, 00:18:44.965 "data_offset": 0, 00:18:44.965 "data_size": 7936 00:18:44.965 }, 00:18:44.965 { 00:18:44.965 "name": "BaseBdev2", 00:18:44.965 "uuid": "a41173e2-2fa1-5ddd-aa08-1fee09e30139", 00:18:44.965 "is_configured": true, 00:18:44.965 "data_offset": 256, 00:18:44.965 "data_size": 7936 00:18:44.965 } 00:18:44.965 ] 00:18:44.965 }' 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87503 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87503 ']' 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87503 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.965 
01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87503 00:18:44.965 killing process with pid 87503 00:18:44.965 Received shutdown signal, test time was about 60.000000 seconds 00:18:44.965 00:18:44.965 Latency(us) 00:18:44.965 [2024-11-17T01:38:53.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.965 [2024-11-17T01:38:53.425Z] =================================================================================================================== 00:18:44.965 [2024-11-17T01:38:53.425Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87503' 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87503 00:18:44.965 [2024-11-17 01:38:53.376117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.965 [2024-11-17 01:38:53.376218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.965 [2024-11-17 01:38:53.376257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.965 [2024-11-17 01:38:53.376267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:44.965 01:38:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87503 00:18:45.225 [2024-11-17 01:38:53.668801] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.607 01:38:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:18:46.607 00:18:46.607 real 0m19.756s 00:18:46.607 user 0m25.930s 00:18:46.607 sys 0m2.627s 00:18:46.607 ************************************ 00:18:46.607 END TEST raid_rebuild_test_sb_md_separate 00:18:46.607 ************************************ 00:18:46.607 01:38:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.607 01:38:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.607 01:38:54 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:46.607 01:38:54 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:46.607 01:38:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:46.607 01:38:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.607 01:38:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.607 ************************************ 00:18:46.607 START TEST raid_state_function_test_sb_md_interleaved 00:18:46.607 ************************************ 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:46.607 Process raid pid: 88189 00:18:46.607 
01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88189 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88189' 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88189 00:18:46.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88189 ']' 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.607 01:38:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.607 [2024-11-17 01:38:54.856336] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:46.607 [2024-11-17 01:38:54.856568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.607 [2024-11-17 01:38:55.019126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.867 [2024-11-17 01:38:55.126627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.867 [2024-11-17 01:38:55.323008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.868 [2024-11-17 01:38:55.323039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.437 [2024-11-17 01:38:55.684897] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.437 [2024-11-17 01:38:55.684950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.437 [2024-11-17 01:38:55.684959] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.437 [2024-11-17 01:38:55.684967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.437 01:38:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.437 01:38:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.437 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.437 "name": "Existed_Raid", 00:18:47.437 "uuid": "ca04a91f-f7ca-4ef2-80c8-6a99d9bb009b", 00:18:47.437 "strip_size_kb": 0, 00:18:47.437 "state": "configuring", 00:18:47.437 "raid_level": "raid1", 00:18:47.437 "superblock": true, 00:18:47.437 "num_base_bdevs": 2, 00:18:47.437 "num_base_bdevs_discovered": 0, 00:18:47.437 "num_base_bdevs_operational": 2, 00:18:47.437 "base_bdevs_list": [ 00:18:47.437 { 00:18:47.437 "name": "BaseBdev1", 00:18:47.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.437 "is_configured": false, 00:18:47.437 "data_offset": 0, 00:18:47.437 "data_size": 0 00:18:47.437 }, 00:18:47.437 { 00:18:47.437 "name": "BaseBdev2", 00:18:47.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.438 "is_configured": false, 00:18:47.438 "data_offset": 0, 00:18:47.438 "data_size": 0 00:18:47.438 } 00:18:47.438 ] 00:18:47.438 }' 00:18:47.438 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.438 01:38:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.697 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:47.697 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.697 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.697 [2024-11-17 01:38:56.152011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.697 [2024-11-17 01:38:56.152092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.957 [2024-11-17 01:38:56.160005] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.957 [2024-11-17 01:38:56.160117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.957 [2024-11-17 01:38:56.160142] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.957 [2024-11-17 01:38:56.160167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.957 [2024-11-17 01:38:56.199042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.957 BaseBdev1 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:47.957 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.958 [ 00:18:47.958 { 00:18:47.958 "name": "BaseBdev1", 00:18:47.958 "aliases": [ 00:18:47.958 "a5ae1ec6-7122-48d3-bd97-cfcc5043098b" 00:18:47.958 ], 00:18:47.958 "product_name": "Malloc disk", 00:18:47.958 "block_size": 4128, 00:18:47.958 "num_blocks": 8192, 00:18:47.958 "uuid": "a5ae1ec6-7122-48d3-bd97-cfcc5043098b", 00:18:47.958 "md_size": 32, 00:18:47.958 
"md_interleave": true, 00:18:47.958 "dif_type": 0, 00:18:47.958 "assigned_rate_limits": { 00:18:47.958 "rw_ios_per_sec": 0, 00:18:47.958 "rw_mbytes_per_sec": 0, 00:18:47.958 "r_mbytes_per_sec": 0, 00:18:47.958 "w_mbytes_per_sec": 0 00:18:47.958 }, 00:18:47.958 "claimed": true, 00:18:47.958 "claim_type": "exclusive_write", 00:18:47.958 "zoned": false, 00:18:47.958 "supported_io_types": { 00:18:47.958 "read": true, 00:18:47.958 "write": true, 00:18:47.958 "unmap": true, 00:18:47.958 "flush": true, 00:18:47.958 "reset": true, 00:18:47.958 "nvme_admin": false, 00:18:47.958 "nvme_io": false, 00:18:47.958 "nvme_io_md": false, 00:18:47.958 "write_zeroes": true, 00:18:47.958 "zcopy": true, 00:18:47.958 "get_zone_info": false, 00:18:47.958 "zone_management": false, 00:18:47.958 "zone_append": false, 00:18:47.958 "compare": false, 00:18:47.958 "compare_and_write": false, 00:18:47.958 "abort": true, 00:18:47.958 "seek_hole": false, 00:18:47.958 "seek_data": false, 00:18:47.958 "copy": true, 00:18:47.958 "nvme_iov_md": false 00:18:47.958 }, 00:18:47.958 "memory_domains": [ 00:18:47.958 { 00:18:47.958 "dma_device_id": "system", 00:18:47.958 "dma_device_type": 1 00:18:47.958 }, 00:18:47.958 { 00:18:47.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.958 "dma_device_type": 2 00:18:47.958 } 00:18:47.958 ], 00:18:47.958 "driver_specific": {} 00:18:47.958 } 00:18:47.958 ] 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:47.958 01:38:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.958 "name": "Existed_Raid", 00:18:47.958 "uuid": "ff0a0e77-e938-444d-be02-e2ec8ee5644a", 00:18:47.958 "strip_size_kb": 0, 00:18:47.958 "state": "configuring", 00:18:47.958 "raid_level": "raid1", 
00:18:47.958 "superblock": true, 00:18:47.958 "num_base_bdevs": 2, 00:18:47.958 "num_base_bdevs_discovered": 1, 00:18:47.958 "num_base_bdevs_operational": 2, 00:18:47.958 "base_bdevs_list": [ 00:18:47.958 { 00:18:47.958 "name": "BaseBdev1", 00:18:47.958 "uuid": "a5ae1ec6-7122-48d3-bd97-cfcc5043098b", 00:18:47.958 "is_configured": true, 00:18:47.958 "data_offset": 256, 00:18:47.958 "data_size": 7936 00:18:47.958 }, 00:18:47.958 { 00:18:47.958 "name": "BaseBdev2", 00:18:47.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.958 "is_configured": false, 00:18:47.958 "data_offset": 0, 00:18:47.958 "data_size": 0 00:18:47.958 } 00:18:47.958 ] 00:18:47.958 }' 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.958 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.219 [2024-11-17 01:38:56.646338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:48.219 [2024-11-17 01:38:56.646379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.219 [2024-11-17 01:38:56.658372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:48.219 [2024-11-17 01:38:56.660108] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:48.219 [2024-11-17 01:38:56.660212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.219 
01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.219 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.479 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.479 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.479 "name": "Existed_Raid", 00:18:48.479 "uuid": "609461b1-17b0-419e-bb6f-32ba7a2aaf7f", 00:18:48.479 "strip_size_kb": 0, 00:18:48.479 "state": "configuring", 00:18:48.479 "raid_level": "raid1", 00:18:48.479 "superblock": true, 00:18:48.479 "num_base_bdevs": 2, 00:18:48.479 "num_base_bdevs_discovered": 1, 00:18:48.479 "num_base_bdevs_operational": 2, 00:18:48.479 "base_bdevs_list": [ 00:18:48.479 { 00:18:48.479 "name": "BaseBdev1", 00:18:48.479 "uuid": "a5ae1ec6-7122-48d3-bd97-cfcc5043098b", 00:18:48.479 "is_configured": true, 00:18:48.479 "data_offset": 256, 00:18:48.479 "data_size": 7936 00:18:48.479 }, 00:18:48.479 { 00:18:48.479 "name": "BaseBdev2", 00:18:48.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.479 "is_configured": false, 00:18:48.479 "data_offset": 0, 00:18:48.479 "data_size": 0 00:18:48.479 } 00:18:48.479 ] 00:18:48.479 }' 00:18:48.479 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:48.479 01:38:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:48.739 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.740 [2024-11-17 01:38:57.154726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.740 [2024-11-17 01:38:57.155016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:48.740 [2024-11-17 01:38:57.155071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:48.740 [2024-11-17 01:38:57.155206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:48.740 [2024-11-17 01:38:57.155312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:48.740 [2024-11-17 01:38:57.155349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:48.740 [2024-11-17 01:38:57.155438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.740 BaseBdev2 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.740 [ 00:18:48.740 { 00:18:48.740 "name": "BaseBdev2", 00:18:48.740 "aliases": [ 00:18:48.740 "6468775f-51dd-42a0-aa42-3a6563b480ba" 00:18:48.740 ], 00:18:48.740 "product_name": "Malloc disk", 00:18:48.740 "block_size": 4128, 00:18:48.740 "num_blocks": 8192, 00:18:48.740 "uuid": "6468775f-51dd-42a0-aa42-3a6563b480ba", 00:18:48.740 "md_size": 32, 00:18:48.740 "md_interleave": true, 00:18:48.740 "dif_type": 0, 00:18:48.740 "assigned_rate_limits": { 00:18:48.740 "rw_ios_per_sec": 0, 00:18:48.740 "rw_mbytes_per_sec": 0, 00:18:48.740 "r_mbytes_per_sec": 0, 00:18:48.740 "w_mbytes_per_sec": 0 00:18:48.740 }, 00:18:48.740 "claimed": true, 00:18:48.740 "claim_type": "exclusive_write", 
00:18:48.740 "zoned": false, 00:18:48.740 "supported_io_types": { 00:18:48.740 "read": true, 00:18:48.740 "write": true, 00:18:48.740 "unmap": true, 00:18:48.740 "flush": true, 00:18:48.740 "reset": true, 00:18:48.740 "nvme_admin": false, 00:18:48.740 "nvme_io": false, 00:18:48.740 "nvme_io_md": false, 00:18:48.740 "write_zeroes": true, 00:18:48.740 "zcopy": true, 00:18:48.740 "get_zone_info": false, 00:18:48.740 "zone_management": false, 00:18:48.740 "zone_append": false, 00:18:48.740 "compare": false, 00:18:48.740 "compare_and_write": false, 00:18:48.740 "abort": true, 00:18:48.740 "seek_hole": false, 00:18:48.740 "seek_data": false, 00:18:48.740 "copy": true, 00:18:48.740 "nvme_iov_md": false 00:18:48.740 }, 00:18:48.740 "memory_domains": [ 00:18:48.740 { 00:18:48.740 "dma_device_id": "system", 00:18:48.740 "dma_device_type": 1 00:18:48.740 }, 00:18:48.740 { 00:18:48.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.740 "dma_device_type": 2 00:18:48.740 } 00:18:48.740 ], 00:18:48.740 "driver_specific": {} 00:18:48.740 } 00:18:48.740 ] 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.740 
01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.740 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.001 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.001 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.001 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.001 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.001 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.001 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.001 "name": "Existed_Raid", 00:18:49.001 "uuid": "609461b1-17b0-419e-bb6f-32ba7a2aaf7f", 00:18:49.001 "strip_size_kb": 0, 00:18:49.001 "state": "online", 00:18:49.001 "raid_level": "raid1", 00:18:49.001 "superblock": true, 00:18:49.001 "num_base_bdevs": 2, 00:18:49.001 "num_base_bdevs_discovered": 2, 00:18:49.001 
"num_base_bdevs_operational": 2, 00:18:49.001 "base_bdevs_list": [ 00:18:49.001 { 00:18:49.001 "name": "BaseBdev1", 00:18:49.001 "uuid": "a5ae1ec6-7122-48d3-bd97-cfcc5043098b", 00:18:49.001 "is_configured": true, 00:18:49.001 "data_offset": 256, 00:18:49.001 "data_size": 7936 00:18:49.001 }, 00:18:49.001 { 00:18:49.001 "name": "BaseBdev2", 00:18:49.001 "uuid": "6468775f-51dd-42a0-aa42-3a6563b480ba", 00:18:49.001 "is_configured": true, 00:18:49.001 "data_offset": 256, 00:18:49.001 "data_size": 7936 00:18:49.001 } 00:18:49.001 ] 00:18:49.001 }' 00:18:49.001 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.001 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:49.261 01:38:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.261 [2024-11-17 01:38:57.646150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.261 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:49.261 "name": "Existed_Raid", 00:18:49.261 "aliases": [ 00:18:49.261 "609461b1-17b0-419e-bb6f-32ba7a2aaf7f" 00:18:49.261 ], 00:18:49.261 "product_name": "Raid Volume", 00:18:49.261 "block_size": 4128, 00:18:49.261 "num_blocks": 7936, 00:18:49.261 "uuid": "609461b1-17b0-419e-bb6f-32ba7a2aaf7f", 00:18:49.261 "md_size": 32, 00:18:49.261 "md_interleave": true, 00:18:49.261 "dif_type": 0, 00:18:49.261 "assigned_rate_limits": { 00:18:49.261 "rw_ios_per_sec": 0, 00:18:49.261 "rw_mbytes_per_sec": 0, 00:18:49.261 "r_mbytes_per_sec": 0, 00:18:49.261 "w_mbytes_per_sec": 0 00:18:49.261 }, 00:18:49.261 "claimed": false, 00:18:49.261 "zoned": false, 00:18:49.261 "supported_io_types": { 00:18:49.261 "read": true, 00:18:49.261 "write": true, 00:18:49.261 "unmap": false, 00:18:49.261 "flush": false, 00:18:49.261 "reset": true, 00:18:49.261 "nvme_admin": false, 00:18:49.261 "nvme_io": false, 00:18:49.261 "nvme_io_md": false, 00:18:49.262 "write_zeroes": true, 00:18:49.262 "zcopy": false, 00:18:49.262 "get_zone_info": false, 00:18:49.262 "zone_management": false, 00:18:49.262 "zone_append": false, 00:18:49.262 "compare": false, 00:18:49.262 "compare_and_write": false, 00:18:49.262 "abort": false, 00:18:49.262 "seek_hole": false, 00:18:49.262 "seek_data": false, 00:18:49.262 "copy": false, 00:18:49.262 "nvme_iov_md": false 00:18:49.262 }, 00:18:49.262 "memory_domains": [ 00:18:49.262 { 00:18:49.262 "dma_device_id": "system", 00:18:49.262 "dma_device_type": 1 00:18:49.262 }, 00:18:49.262 { 00:18:49.262 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:49.262 "dma_device_type": 2 00:18:49.262 }, 00:18:49.262 { 00:18:49.262 "dma_device_id": "system", 00:18:49.262 "dma_device_type": 1 00:18:49.262 }, 00:18:49.262 { 00:18:49.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.262 "dma_device_type": 2 00:18:49.262 } 00:18:49.262 ], 00:18:49.262 "driver_specific": { 00:18:49.262 "raid": { 00:18:49.262 "uuid": "609461b1-17b0-419e-bb6f-32ba7a2aaf7f", 00:18:49.262 "strip_size_kb": 0, 00:18:49.262 "state": "online", 00:18:49.262 "raid_level": "raid1", 00:18:49.262 "superblock": true, 00:18:49.262 "num_base_bdevs": 2, 00:18:49.262 "num_base_bdevs_discovered": 2, 00:18:49.262 "num_base_bdevs_operational": 2, 00:18:49.262 "base_bdevs_list": [ 00:18:49.262 { 00:18:49.262 "name": "BaseBdev1", 00:18:49.262 "uuid": "a5ae1ec6-7122-48d3-bd97-cfcc5043098b", 00:18:49.262 "is_configured": true, 00:18:49.262 "data_offset": 256, 00:18:49.262 "data_size": 7936 00:18:49.262 }, 00:18:49.262 { 00:18:49.262 "name": "BaseBdev2", 00:18:49.262 "uuid": "6468775f-51dd-42a0-aa42-3a6563b480ba", 00:18:49.262 "is_configured": true, 00:18:49.262 "data_offset": 256, 00:18:49.262 "data_size": 7936 00:18:49.262 } 00:18:49.262 ] 00:18:49.262 } 00:18:49.262 } 00:18:49.262 }' 00:18:49.262 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:49.262 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:49.262 BaseBdev2' 00:18:49.262 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:49.522 
01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.522 [2024-11-17 01:38:57.873599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.522 01:38:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.522 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.523 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.523 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.523 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.523 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.783 01:38:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.783 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.783 "name": "Existed_Raid", 00:18:49.783 "uuid": "609461b1-17b0-419e-bb6f-32ba7a2aaf7f", 00:18:49.783 "strip_size_kb": 0, 00:18:49.783 "state": "online", 00:18:49.783 "raid_level": "raid1", 00:18:49.783 "superblock": true, 00:18:49.783 "num_base_bdevs": 2, 00:18:49.783 "num_base_bdevs_discovered": 1, 00:18:49.783 "num_base_bdevs_operational": 1, 00:18:49.783 "base_bdevs_list": [ 00:18:49.783 { 00:18:49.783 "name": null, 00:18:49.783 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:49.783 "is_configured": false, 00:18:49.783 "data_offset": 0, 00:18:49.783 "data_size": 7936 00:18:49.783 }, 00:18:49.783 { 00:18:49.783 "name": "BaseBdev2", 00:18:49.783 "uuid": "6468775f-51dd-42a0-aa42-3a6563b480ba", 00:18:49.783 "is_configured": true, 00:18:49.783 "data_offset": 256, 00:18:49.783 "data_size": 7936 00:18:49.783 } 00:18:49.783 ] 00:18:49.783 }' 00:18:49.783 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.783 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:50.043 01:38:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.043 [2024-11-17 01:38:58.384963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:50.043 [2024-11-17 01:38:58.385134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.043 [2024-11-17 01:38:58.473351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.043 [2024-11-17 01:38:58.473468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.043 [2024-11-17 01:38:58.473507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.043 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88189 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88189 ']' 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88189 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88189 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.304 killing process with pid 88189 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88189' 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88189 00:18:50.304 [2024-11-17 01:38:58.571482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.304 01:38:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88189 00:18:50.304 [2024-11-17 01:38:58.587165] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.259 
************************************ 00:18:51.259 END TEST raid_state_function_test_sb_md_interleaved 00:18:51.259 ************************************ 00:18:51.259 01:38:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:51.259 00:18:51.259 real 0m4.865s 00:18:51.259 user 0m7.016s 00:18:51.259 sys 0m0.884s 00:18:51.259 01:38:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.259 01:38:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.259 01:38:59 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:51.259 01:38:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:51.259 01:38:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.259 01:38:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.259 ************************************ 00:18:51.259 START TEST raid_superblock_test_md_interleaved 00:18:51.259 ************************************ 00:18:51.259 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:51.259 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:51.259 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:51.259 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:51.259 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:51.259 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88436 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88436 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88436 ']' 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.260 01:38:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.520 [2024-11-17 01:38:59.799849] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:51.520 [2024-11-17 01:38:59.800045] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88436 ] 00:18:51.520 [2024-11-17 01:38:59.978449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.779 [2024-11-17 01:39:00.085450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.038 [2024-11-17 01:39:00.284947] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.038 [2024-11-17 01:39:00.285078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:52.298 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.299 malloc1 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.299 [2024-11-17 01:39:00.667675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:52.299 [2024-11-17 01:39:00.667849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.299 [2024-11-17 01:39:00.667903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:52.299 [2024-11-17 01:39:00.667958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.299 
[2024-11-17 01:39:00.669725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.299 [2024-11-17 01:39:00.669779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:52.299 pt1 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.299 malloc2 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.299 [2024-11-17 01:39:00.719997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:52.299 [2024-11-17 01:39:00.720127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.299 [2024-11-17 01:39:00.720163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:52.299 [2024-11-17 01:39:00.720190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.299 [2024-11-17 01:39:00.721953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.299 [2024-11-17 01:39:00.722015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:52.299 pt2 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.299 [2024-11-17 01:39:00.732017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:52.299 [2024-11-17 01:39:00.733749] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:52.299 [2024-11-17 01:39:00.733983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:52.299 [2024-11-17 01:39:00.734029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:52.299 [2024-11-17 01:39:00.734128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:52.299 [2024-11-17 01:39:00.734229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:52.299 [2024-11-17 01:39:00.734270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:52.299 [2024-11-17 01:39:00.734366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.299 
01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.299 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.559 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.559 "name": "raid_bdev1", 00:18:52.559 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:52.559 "strip_size_kb": 0, 00:18:52.559 "state": "online", 00:18:52.559 "raid_level": "raid1", 00:18:52.559 "superblock": true, 00:18:52.559 "num_base_bdevs": 2, 00:18:52.559 "num_base_bdevs_discovered": 2, 00:18:52.559 "num_base_bdevs_operational": 2, 00:18:52.559 "base_bdevs_list": [ 00:18:52.559 { 00:18:52.559 "name": "pt1", 00:18:52.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:52.559 "is_configured": true, 00:18:52.559 "data_offset": 256, 00:18:52.559 "data_size": 7936 00:18:52.559 }, 00:18:52.559 { 00:18:52.559 "name": "pt2", 00:18:52.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:52.559 "is_configured": true, 00:18:52.559 "data_offset": 256, 00:18:52.559 "data_size": 7936 00:18:52.559 } 00:18:52.559 ] 00:18:52.559 }' 00:18:52.559 01:39:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.559 01:39:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.820 [2024-11-17 01:39:01.203581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:52.820 "name": "raid_bdev1", 00:18:52.820 "aliases": [ 00:18:52.820 "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058" 00:18:52.820 ], 00:18:52.820 "product_name": "Raid Volume", 00:18:52.820 "block_size": 4128, 00:18:52.820 "num_blocks": 7936, 00:18:52.820 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:52.820 "md_size": 32, 
00:18:52.820 "md_interleave": true, 00:18:52.820 "dif_type": 0, 00:18:52.820 "assigned_rate_limits": { 00:18:52.820 "rw_ios_per_sec": 0, 00:18:52.820 "rw_mbytes_per_sec": 0, 00:18:52.820 "r_mbytes_per_sec": 0, 00:18:52.820 "w_mbytes_per_sec": 0 00:18:52.820 }, 00:18:52.820 "claimed": false, 00:18:52.820 "zoned": false, 00:18:52.820 "supported_io_types": { 00:18:52.820 "read": true, 00:18:52.820 "write": true, 00:18:52.820 "unmap": false, 00:18:52.820 "flush": false, 00:18:52.820 "reset": true, 00:18:52.820 "nvme_admin": false, 00:18:52.820 "nvme_io": false, 00:18:52.820 "nvme_io_md": false, 00:18:52.820 "write_zeroes": true, 00:18:52.820 "zcopy": false, 00:18:52.820 "get_zone_info": false, 00:18:52.820 "zone_management": false, 00:18:52.820 "zone_append": false, 00:18:52.820 "compare": false, 00:18:52.820 "compare_and_write": false, 00:18:52.820 "abort": false, 00:18:52.820 "seek_hole": false, 00:18:52.820 "seek_data": false, 00:18:52.820 "copy": false, 00:18:52.820 "nvme_iov_md": false 00:18:52.820 }, 00:18:52.820 "memory_domains": [ 00:18:52.820 { 00:18:52.820 "dma_device_id": "system", 00:18:52.820 "dma_device_type": 1 00:18:52.820 }, 00:18:52.820 { 00:18:52.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.820 "dma_device_type": 2 00:18:52.820 }, 00:18:52.820 { 00:18:52.820 "dma_device_id": "system", 00:18:52.820 "dma_device_type": 1 00:18:52.820 }, 00:18:52.820 { 00:18:52.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.820 "dma_device_type": 2 00:18:52.820 } 00:18:52.820 ], 00:18:52.820 "driver_specific": { 00:18:52.820 "raid": { 00:18:52.820 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:52.820 "strip_size_kb": 0, 00:18:52.820 "state": "online", 00:18:52.820 "raid_level": "raid1", 00:18:52.820 "superblock": true, 00:18:52.820 "num_base_bdevs": 2, 00:18:52.820 "num_base_bdevs_discovered": 2, 00:18:52.820 "num_base_bdevs_operational": 2, 00:18:52.820 "base_bdevs_list": [ 00:18:52.820 { 00:18:52.820 "name": "pt1", 00:18:52.820 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:52.820 "is_configured": true, 00:18:52.820 "data_offset": 256, 00:18:52.820 "data_size": 7936 00:18:52.820 }, 00:18:52.820 { 00:18:52.820 "name": "pt2", 00:18:52.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:52.820 "is_configured": true, 00:18:52.820 "data_offset": 256, 00:18:52.820 "data_size": 7936 00:18:52.820 } 00:18:52.820 ] 00:18:52.820 } 00:18:52.820 } 00:18:52.820 }' 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:52.820 pt2' 00:18:52.820 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:53.081 01:39:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.081 [2024-11-17 01:39:01.431300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058 ']' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.081 [2024-11-17 01:39:01.474983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.081 [2024-11-17 01:39:01.475050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.081 [2024-11-17 01:39:01.475147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.081 [2024-11-17 01:39:01.475206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.081 [2024-11-17 01:39:01.475218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.081 01:39:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.081 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:53.342 01:39:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.342 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.342 [2024-11-17 01:39:01.614777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:53.342 [2024-11-17 01:39:01.616535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:53.342 [2024-11-17 01:39:01.616645] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:53.343 [2024-11-17 01:39:01.616741] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:53.343 [2024-11-17 01:39:01.616831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.343 [2024-11-17 01:39:01.616853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:53.343 request: 00:18:53.343 { 00:18:53.343 "name": "raid_bdev1", 00:18:53.343 "raid_level": "raid1", 00:18:53.343 "base_bdevs": [ 00:18:53.343 "malloc1", 00:18:53.343 "malloc2" 00:18:53.343 ], 00:18:53.343 "superblock": false, 00:18:53.343 "method": "bdev_raid_create", 00:18:53.343 "req_id": 1 00:18:53.343 } 00:18:53.343 Got JSON-RPC error response 00:18:53.343 response: 00:18:53.343 { 00:18:53.343 "code": -17, 00:18:53.343 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:53.343 } 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.343 01:39:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.343 [2024-11-17 01:39:01.678632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:53.343 [2024-11-17 01:39:01.678738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.343 [2024-11-17 01:39:01.678768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:53.343 [2024-11-17 01:39:01.678814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.343 [2024-11-17 01:39:01.680614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.343 [2024-11-17 01:39:01.680684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:53.343 [2024-11-17 01:39:01.680741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:53.343 [2024-11-17 01:39:01.680840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:53.343 pt1 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.343 01:39:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.343 
"name": "raid_bdev1", 00:18:53.343 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:53.343 "strip_size_kb": 0, 00:18:53.343 "state": "configuring", 00:18:53.343 "raid_level": "raid1", 00:18:53.343 "superblock": true, 00:18:53.343 "num_base_bdevs": 2, 00:18:53.343 "num_base_bdevs_discovered": 1, 00:18:53.343 "num_base_bdevs_operational": 2, 00:18:53.343 "base_bdevs_list": [ 00:18:53.343 { 00:18:53.343 "name": "pt1", 00:18:53.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:53.343 "is_configured": true, 00:18:53.343 "data_offset": 256, 00:18:53.343 "data_size": 7936 00:18:53.343 }, 00:18:53.343 { 00:18:53.343 "name": null, 00:18:53.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:53.343 "is_configured": false, 00:18:53.343 "data_offset": 256, 00:18:53.343 "data_size": 7936 00:18:53.343 } 00:18:53.343 ] 00:18:53.343 }' 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.343 01:39:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.914 [2024-11-17 01:39:02.121859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:53.914 [2024-11-17 01:39:02.121967] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.914 [2024-11-17 01:39:02.122003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:53.914 [2024-11-17 01:39:02.122033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.914 [2024-11-17 01:39:02.122174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.914 [2024-11-17 01:39:02.122216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:53.914 [2024-11-17 01:39:02.122274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:53.914 [2024-11-17 01:39:02.122320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:53.914 [2024-11-17 01:39:02.122423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:53.914 [2024-11-17 01:39:02.122437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:53.914 [2024-11-17 01:39:02.122504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:53.914 [2024-11-17 01:39:02.122572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:53.914 [2024-11-17 01:39:02.122580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:53.914 [2024-11-17 01:39:02.122633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.914 pt2 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:53.914 01:39:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.914 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.914 "name": 
"raid_bdev1", 00:18:53.914 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:53.914 "strip_size_kb": 0, 00:18:53.914 "state": "online", 00:18:53.914 "raid_level": "raid1", 00:18:53.914 "superblock": true, 00:18:53.914 "num_base_bdevs": 2, 00:18:53.914 "num_base_bdevs_discovered": 2, 00:18:53.914 "num_base_bdevs_operational": 2, 00:18:53.914 "base_bdevs_list": [ 00:18:53.914 { 00:18:53.914 "name": "pt1", 00:18:53.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:53.914 "is_configured": true, 00:18:53.914 "data_offset": 256, 00:18:53.915 "data_size": 7936 00:18:53.915 }, 00:18:53.915 { 00:18:53.915 "name": "pt2", 00:18:53.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:53.915 "is_configured": true, 00:18:53.915 "data_offset": 256, 00:18:53.915 "data_size": 7936 00:18:53.915 } 00:18:53.915 ] 00:18:53.915 }' 00:18:53.915 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.915 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:54.175 01:39:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.175 [2024-11-17 01:39:02.549385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.175 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.175 "name": "raid_bdev1", 00:18:54.175 "aliases": [ 00:18:54.175 "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058" 00:18:54.175 ], 00:18:54.175 "product_name": "Raid Volume", 00:18:54.175 "block_size": 4128, 00:18:54.175 "num_blocks": 7936, 00:18:54.175 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:54.175 "md_size": 32, 00:18:54.175 "md_interleave": true, 00:18:54.175 "dif_type": 0, 00:18:54.175 "assigned_rate_limits": { 00:18:54.175 "rw_ios_per_sec": 0, 00:18:54.175 "rw_mbytes_per_sec": 0, 00:18:54.175 "r_mbytes_per_sec": 0, 00:18:54.175 "w_mbytes_per_sec": 0 00:18:54.175 }, 00:18:54.176 "claimed": false, 00:18:54.176 "zoned": false, 00:18:54.176 "supported_io_types": { 00:18:54.176 "read": true, 00:18:54.176 "write": true, 00:18:54.176 "unmap": false, 00:18:54.176 "flush": false, 00:18:54.176 "reset": true, 00:18:54.176 "nvme_admin": false, 00:18:54.176 "nvme_io": false, 00:18:54.176 "nvme_io_md": false, 00:18:54.176 "write_zeroes": true, 00:18:54.176 "zcopy": false, 00:18:54.176 "get_zone_info": false, 00:18:54.176 "zone_management": false, 00:18:54.176 "zone_append": false, 00:18:54.176 "compare": false, 00:18:54.176 "compare_and_write": false, 00:18:54.176 "abort": false, 00:18:54.176 "seek_hole": false, 00:18:54.176 "seek_data": false, 00:18:54.176 "copy": false, 00:18:54.176 "nvme_iov_md": 
false 00:18:54.176 }, 00:18:54.176 "memory_domains": [ 00:18:54.176 { 00:18:54.176 "dma_device_id": "system", 00:18:54.176 "dma_device_type": 1 00:18:54.176 }, 00:18:54.176 { 00:18:54.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.176 "dma_device_type": 2 00:18:54.176 }, 00:18:54.176 { 00:18:54.176 "dma_device_id": "system", 00:18:54.176 "dma_device_type": 1 00:18:54.176 }, 00:18:54.176 { 00:18:54.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.176 "dma_device_type": 2 00:18:54.176 } 00:18:54.176 ], 00:18:54.176 "driver_specific": { 00:18:54.176 "raid": { 00:18:54.176 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:54.176 "strip_size_kb": 0, 00:18:54.176 "state": "online", 00:18:54.176 "raid_level": "raid1", 00:18:54.176 "superblock": true, 00:18:54.176 "num_base_bdevs": 2, 00:18:54.176 "num_base_bdevs_discovered": 2, 00:18:54.176 "num_base_bdevs_operational": 2, 00:18:54.176 "base_bdevs_list": [ 00:18:54.176 { 00:18:54.176 "name": "pt1", 00:18:54.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:54.176 "is_configured": true, 00:18:54.176 "data_offset": 256, 00:18:54.176 "data_size": 7936 00:18:54.176 }, 00:18:54.176 { 00:18:54.176 "name": "pt2", 00:18:54.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.176 "is_configured": true, 00:18:54.176 "data_offset": 256, 00:18:54.176 "data_size": 7936 00:18:54.176 } 00:18:54.176 ] 00:18:54.176 } 00:18:54.176 } 00:18:54.176 }' 00:18:54.176 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:54.436 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:54.436 pt2' 00:18:54.436 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.436 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:54.436 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.436 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.437 [2024-11-17 01:39:02.781027] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058 '!=' 6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058 ']' 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.437 [2024-11-17 01:39:02.828707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:54.437 "name": "raid_bdev1", 00:18:54.437 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:54.437 "strip_size_kb": 0, 00:18:54.437 "state": "online", 00:18:54.437 "raid_level": "raid1", 00:18:54.437 "superblock": true, 00:18:54.437 "num_base_bdevs": 2, 00:18:54.437 "num_base_bdevs_discovered": 1, 00:18:54.437 "num_base_bdevs_operational": 1, 00:18:54.437 "base_bdevs_list": [ 00:18:54.437 { 00:18:54.437 "name": null, 00:18:54.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.437 "is_configured": false, 00:18:54.437 "data_offset": 0, 00:18:54.437 "data_size": 7936 00:18:54.437 }, 00:18:54.437 { 00:18:54.437 "name": "pt2", 00:18:54.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.437 "is_configured": true, 00:18:54.437 "data_offset": 256, 00:18:54.437 "data_size": 7936 00:18:54.437 } 00:18:54.437 ] 00:18:54.437 }' 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.437 01:39:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.008 [2024-11-17 01:39:03.259931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.008 [2024-11-17 01:39:03.260004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.008 [2024-11-17 01:39:03.260088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.008 [2024-11-17 01:39:03.260156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:55.008 [2024-11-17 01:39:03.260199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.008 [2024-11-17 01:39:03.335829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.008 [2024-11-17 01:39:03.335881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.008 [2024-11-17 01:39:03.335896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:55.008 [2024-11-17 01:39:03.335907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.008 [2024-11-17 01:39:03.337944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.008 [2024-11-17 01:39:03.338023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.008 [2024-11-17 01:39:03.338075] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:55.008 [2024-11-17 01:39:03.338123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.008 [2024-11-17 01:39:03.338186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:55.008 [2024-11-17 01:39:03.338197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:55.008 [2024-11-17 01:39:03.338279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:55.008 [2024-11-17 01:39:03.338340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:55.008 [2024-11-17 01:39:03.338347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:55.008 [2024-11-17 01:39:03.338402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.008 pt2 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.008 01:39:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.008 "name": "raid_bdev1", 00:18:55.008 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:55.008 "strip_size_kb": 0, 00:18:55.008 "state": "online", 00:18:55.008 "raid_level": "raid1", 00:18:55.008 "superblock": true, 00:18:55.008 "num_base_bdevs": 2, 00:18:55.008 "num_base_bdevs_discovered": 1, 00:18:55.008 "num_base_bdevs_operational": 1, 00:18:55.008 "base_bdevs_list": [ 00:18:55.008 { 00:18:55.008 "name": null, 00:18:55.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.008 "is_configured": false, 00:18:55.008 "data_offset": 256, 00:18:55.008 "data_size": 7936 00:18:55.008 }, 00:18:55.008 { 00:18:55.008 "name": "pt2", 00:18:55.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.008 "is_configured": true, 00:18:55.008 "data_offset": 256, 00:18:55.008 "data_size": 7936 00:18:55.008 } 00:18:55.008 ] 00:18:55.008 }' 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.008 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:55.580 01:39:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.580 [2024-11-17 01:39:03.735122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.580 [2024-11-17 01:39:03.735193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.580 [2024-11-17 01:39:03.735267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.580 [2024-11-17 01:39:03.735337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.580 [2024-11-17 01:39:03.735368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.580 [2024-11-17 01:39:03.799049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:55.580 [2024-11-17 01:39:03.799160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.580 [2024-11-17 01:39:03.799206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:55.580 [2024-11-17 01:39:03.799235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.580 [2024-11-17 01:39:03.800984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.580 [2024-11-17 01:39:03.801064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:55.580 [2024-11-17 01:39:03.801128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:55.580 [2024-11-17 01:39:03.801188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.580 [2024-11-17 01:39:03.801296] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:55.580 [2024-11-17 01:39:03.801344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.580 [2024-11-17 01:39:03.801377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:55.580 [2024-11-17 01:39:03.801462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.580 [2024-11-17 01:39:03.801552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:55.580 [2024-11-17 01:39:03.801587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:55.580 [2024-11-17 01:39:03.801656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:55.580 [2024-11-17 01:39:03.801743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:55.580 [2024-11-17 01:39:03.801812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:55.580 [2024-11-17 01:39:03.801911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.580 pt1 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.580 01:39:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.580 "name": "raid_bdev1", 00:18:55.580 "uuid": "6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058", 00:18:55.580 "strip_size_kb": 0, 00:18:55.580 "state": "online", 00:18:55.580 "raid_level": "raid1", 00:18:55.580 "superblock": true, 00:18:55.580 "num_base_bdevs": 2, 00:18:55.580 "num_base_bdevs_discovered": 1, 00:18:55.580 "num_base_bdevs_operational": 1, 00:18:55.580 "base_bdevs_list": [ 00:18:55.580 { 00:18:55.580 "name": null, 00:18:55.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.580 "is_configured": false, 00:18:55.580 "data_offset": 256, 00:18:55.580 "data_size": 7936 00:18:55.580 }, 00:18:55.580 { 00:18:55.580 "name": "pt2", 00:18:55.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.580 "is_configured": true, 00:18:55.580 "data_offset": 256, 00:18:55.580 "data_size": 7936 00:18:55.580 } 00:18:55.580 ] 00:18:55.580 }' 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.580 01:39:03 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.841 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:55.841 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:55.841 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.841 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.841 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.102 [2024-11-17 01:39:04.326347] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058 '!=' 6bfcc0a6-1c2b-46a6-ab76-c2f56d6c1058 ']' 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88436 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88436 ']' 00:18:56.102 01:39:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88436 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88436 00:18:56.102 killing process with pid 88436 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88436' 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88436 00:18:56.102 [2024-11-17 01:39:04.400727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.102 [2024-11-17 01:39:04.400808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.102 [2024-11-17 01:39:04.400847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.102 [2024-11-17 01:39:04.400859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:56.102 01:39:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88436 00:18:56.365 [2024-11-17 01:39:04.596394] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.334 ************************************ 00:18:57.334 END TEST raid_superblock_test_md_interleaved 00:18:57.334 ************************************ 00:18:57.334 01:39:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:57.334 00:18:57.334 real 0m5.924s 00:18:57.335 user 0m9.010s 00:18:57.335 sys 0m1.092s 00:18:57.335 01:39:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.335 01:39:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.335 01:39:05 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:57.335 01:39:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:57.335 01:39:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.335 01:39:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.335 ************************************ 00:18:57.335 START TEST raid_rebuild_test_sb_md_interleaved 00:18:57.335 ************************************ 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.335 01:39:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:57.335 
01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88765
00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88765
00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88765 ']'
00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:57.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:57.335 01:39:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:57.595 [2024-11-17 01:39:05.809854] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:18:57.595 [2024-11-17 01:39:05.810036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88765 ]
00:18:57.595 I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:57.595 Zero copy mechanism will not be used.
00:18:57.595 [2024-11-17 01:39:05.989686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:57.856 [2024-11-17 01:39:06.095321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:57.856 [2024-11-17 01:39:06.290391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:57.856 [2024-11-17 01:39:06.290485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:58.427 BaseBdev1_malloc
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:58.427 [2024-11-17 01:39:06.666688] vbdev_passthru.c:
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:58.427 [2024-11-17 01:39:06.666753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.427 [2024-11-17 01:39:06.666792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:58.427 [2024-11-17 01:39:06.666804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.427 [2024-11-17 01:39:06.668587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.427 [2024-11-17 01:39:06.668630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:58.427 BaseBdev1 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 BaseBdev2_malloc 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 [2024-11-17 01:39:06.721148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:18:58.427 [2024-11-17 01:39:06.721213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.427 [2024-11-17 01:39:06.721231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:58.427 [2024-11-17 01:39:06.721244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.427 [2024-11-17 01:39:06.723001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.427 [2024-11-17 01:39:06.723036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:58.427 BaseBdev2 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 spare_malloc 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 spare_delay 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 [2024-11-17 01:39:06.797467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:58.427 [2024-11-17 01:39:06.797526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.427 [2024-11-17 01:39:06.797543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:58.427 [2024-11-17 01:39:06.797553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.427 [2024-11-17 01:39:06.799311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.427 [2024-11-17 01:39:06.799349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:58.427 spare 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.427 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 [2024-11-17 01:39:06.809483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.427 [2024-11-17 01:39:06.811222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.427 [2024-11-17 01:39:06.811411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:58.427 [2024-11-17 01:39:06.811425] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:58.427 [2024-11-17 01:39:06.811509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:58.428 [2024-11-17 01:39:06.811572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:58.428 [2024-11-17 01:39:06.811579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:58.428 [2024-11-17 01:39:06.811640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.428 01:39:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.428 "name": "raid_bdev1", 00:18:58.428 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:18:58.428 "strip_size_kb": 0, 00:18:58.428 "state": "online", 00:18:58.428 "raid_level": "raid1", 00:18:58.428 "superblock": true, 00:18:58.428 "num_base_bdevs": 2, 00:18:58.428 "num_base_bdevs_discovered": 2, 00:18:58.428 "num_base_bdevs_operational": 2, 00:18:58.428 "base_bdevs_list": [ 00:18:58.428 { 00:18:58.428 "name": "BaseBdev1", 00:18:58.428 "uuid": "486f4f55-6614-5cd4-ad5b-c9127608f11c", 00:18:58.428 "is_configured": true, 00:18:58.428 "data_offset": 256, 00:18:58.428 "data_size": 7936 00:18:58.428 }, 00:18:58.428 { 00:18:58.428 "name": "BaseBdev2", 00:18:58.428 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:18:58.428 "is_configured": true, 00:18:58.428 "data_offset": 256, 00:18:58.428 "data_size": 7936 00:18:58.428 } 00:18:58.428 ] 00:18:58.428 }' 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.428 01:39:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:58.998 01:39:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.998 [2024-11-17 01:39:07.228997] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.998 [2024-11-17 01:39:07.320564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.998 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.998 "name": "raid_bdev1", 00:18:58.998 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:18:58.998 "strip_size_kb": 0, 00:18:58.998 "state": "online", 00:18:58.998 "raid_level": "raid1", 00:18:58.998 "superblock": true, 00:18:58.998 "num_base_bdevs": 2, 00:18:58.998 "num_base_bdevs_discovered": 1, 00:18:58.998 "num_base_bdevs_operational": 1, 00:18:58.998 "base_bdevs_list": [ 00:18:58.998 { 00:18:58.999 "name": null, 00:18:58.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.999 "is_configured": false, 00:18:58.999 "data_offset": 0, 00:18:58.999 "data_size": 7936 00:18:58.999 }, 00:18:58.999 { 00:18:58.999 "name": "BaseBdev2", 00:18:58.999 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:18:58.999 "is_configured": true, 00:18:58.999 "data_offset": 256, 00:18:58.999 "data_size": 7936 00:18:58.999 } 00:18:58.999 ] 00:18:58.999 }' 00:18:58.999 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.999 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.569 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:59.569 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.569 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.569 [2024-11-17 01:39:07.743847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:59.569 [2024-11-17 01:39:07.758547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:59.569 
01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.569 01:39:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:59.569 [2024-11-17 01:39:07.760367] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:00.508 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.508 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.508 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.508 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.508 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.509 "name": "raid_bdev1", 00:19:00.509 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:00.509 "strip_size_kb": 0, 00:19:00.509 "state": "online", 00:19:00.509 "raid_level": "raid1", 00:19:00.509 "superblock": true, 00:19:00.509 "num_base_bdevs": 2, 
00:19:00.509 "num_base_bdevs_discovered": 2, 00:19:00.509 "num_base_bdevs_operational": 2, 00:19:00.509 "process": { 00:19:00.509 "type": "rebuild", 00:19:00.509 "target": "spare", 00:19:00.509 "progress": { 00:19:00.509 "blocks": 2560, 00:19:00.509 "percent": 32 00:19:00.509 } 00:19:00.509 }, 00:19:00.509 "base_bdevs_list": [ 00:19:00.509 { 00:19:00.509 "name": "spare", 00:19:00.509 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:00.509 "is_configured": true, 00:19:00.509 "data_offset": 256, 00:19:00.509 "data_size": 7936 00:19:00.509 }, 00:19:00.509 { 00:19:00.509 "name": "BaseBdev2", 00:19:00.509 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:00.509 "is_configured": true, 00:19:00.509 "data_offset": 256, 00:19:00.509 "data_size": 7936 00:19:00.509 } 00:19:00.509 ] 00:19:00.509 }' 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.509 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.509 [2024-11-17 01:39:08.920092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.509 [2024-11-17 01:39:08.965018] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:00.509 [2024-11-17 01:39:08.965077] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.509 [2024-11-17 01:39:08.965092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.509 [2024-11-17 01:39:08.965104] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:00.770 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.770 01:39:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.770 01:39:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.770 "name": "raid_bdev1", 00:19:00.770 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:00.770 "strip_size_kb": 0, 00:19:00.770 "state": "online", 00:19:00.770 "raid_level": "raid1", 00:19:00.770 "superblock": true, 00:19:00.770 "num_base_bdevs": 2, 00:19:00.770 "num_base_bdevs_discovered": 1, 00:19:00.770 "num_base_bdevs_operational": 1, 00:19:00.770 "base_bdevs_list": [ 00:19:00.770 { 00:19:00.770 "name": null, 00:19:00.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.770 "is_configured": false, 00:19:00.770 "data_offset": 0, 00:19:00.770 "data_size": 7936 00:19:00.770 }, 00:19:00.770 { 00:19:00.770 "name": "BaseBdev2", 00:19:00.770 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:00.770 "is_configured": true, 00:19:00.770 "data_offset": 256, 00:19:00.770 "data_size": 7936 00:19:00.770 } 00:19:00.770 ] 00:19:00.770 }' 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.770 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.291 "name": "raid_bdev1", 00:19:01.291 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:01.291 "strip_size_kb": 0, 00:19:01.291 "state": "online", 00:19:01.291 "raid_level": "raid1", 00:19:01.291 "superblock": true, 00:19:01.291 "num_base_bdevs": 2, 00:19:01.291 "num_base_bdevs_discovered": 1, 00:19:01.291 "num_base_bdevs_operational": 1, 00:19:01.291 "base_bdevs_list": [ 00:19:01.291 { 00:19:01.291 "name": null, 00:19:01.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.291 "is_configured": false, 00:19:01.291 "data_offset": 0, 00:19:01.291 "data_size": 7936 00:19:01.291 }, 00:19:01.291 { 00:19:01.291 "name": "BaseBdev2", 00:19:01.291 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:01.291 "is_configured": true, 00:19:01.291 "data_offset": 256, 00:19:01.291 "data_size": 7936 00:19:01.291 } 00:19:01.291 ] 00:19:01.291 }' 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # 
[[ none == \n\o\n\e ]] 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.291 [2024-11-17 01:39:09.602158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.291 [2024-11-17 01:39:09.617215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.291 01:39:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:01.291 [2024-11-17 01:39:09.618947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.233 "name": "raid_bdev1", 00:19:02.233 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:02.233 "strip_size_kb": 0, 00:19:02.233 "state": "online", 00:19:02.233 "raid_level": "raid1", 00:19:02.233 "superblock": true, 00:19:02.233 "num_base_bdevs": 2, 00:19:02.233 "num_base_bdevs_discovered": 2, 00:19:02.233 "num_base_bdevs_operational": 2, 00:19:02.233 "process": { 00:19:02.233 "type": "rebuild", 00:19:02.233 "target": "spare", 00:19:02.233 "progress": { 00:19:02.233 "blocks": 2560, 00:19:02.233 "percent": 32 00:19:02.233 } 00:19:02.233 }, 00:19:02.233 "base_bdevs_list": [ 00:19:02.233 { 00:19:02.233 "name": "spare", 00:19:02.233 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:02.233 "is_configured": true, 00:19:02.233 "data_offset": 256, 00:19:02.233 "data_size": 7936 00:19:02.233 }, 00:19:02.233 { 00:19:02.233 "name": "BaseBdev2", 00:19:02.233 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:02.233 "is_configured": true, 00:19:02.233 "data_offset": 256, 00:19:02.233 "data_size": 7936 00:19:02.233 } 00:19:02.233 ] 00:19:02.233 }' 00:19:02.233 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.494 01:39:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:02.494 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=724 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.494 01:39:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.494 "name": "raid_bdev1", 00:19:02.494 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:02.494 "strip_size_kb": 0, 00:19:02.494 "state": "online", 00:19:02.494 "raid_level": "raid1", 00:19:02.494 "superblock": true, 00:19:02.494 "num_base_bdevs": 2, 00:19:02.494 "num_base_bdevs_discovered": 2, 00:19:02.494 "num_base_bdevs_operational": 2, 00:19:02.494 "process": { 00:19:02.494 "type": "rebuild", 00:19:02.494 "target": "spare", 00:19:02.494 "progress": { 00:19:02.494 "blocks": 2816, 00:19:02.494 "percent": 35 00:19:02.494 } 00:19:02.494 }, 00:19:02.494 "base_bdevs_list": [ 00:19:02.494 { 00:19:02.494 "name": "spare", 00:19:02.494 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:02.494 "is_configured": true, 00:19:02.494 "data_offset": 256, 00:19:02.494 "data_size": 7936 00:19:02.494 }, 00:19:02.494 { 00:19:02.494 "name": "BaseBdev2", 00:19:02.494 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:02.494 "is_configured": true, 00:19:02.494 "data_offset": 256, 00:19:02.494 "data_size": 7936 00:19:02.494 } 00:19:02.494 ] 00:19:02.494 }' 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.494 01:39:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.876 "name": "raid_bdev1", 00:19:03.876 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:03.876 "strip_size_kb": 0, 00:19:03.876 "state": "online", 00:19:03.876 "raid_level": "raid1", 00:19:03.876 "superblock": true, 
00:19:03.876 "num_base_bdevs": 2, 00:19:03.876 "num_base_bdevs_discovered": 2, 00:19:03.876 "num_base_bdevs_operational": 2, 00:19:03.876 "process": { 00:19:03.876 "type": "rebuild", 00:19:03.876 "target": "spare", 00:19:03.876 "progress": { 00:19:03.876 "blocks": 5632, 00:19:03.876 "percent": 70 00:19:03.876 } 00:19:03.876 }, 00:19:03.876 "base_bdevs_list": [ 00:19:03.876 { 00:19:03.876 "name": "spare", 00:19:03.876 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:03.876 "is_configured": true, 00:19:03.876 "data_offset": 256, 00:19:03.876 "data_size": 7936 00:19:03.876 }, 00:19:03.876 { 00:19:03.876 "name": "BaseBdev2", 00:19:03.876 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:03.876 "is_configured": true, 00:19:03.876 "data_offset": 256, 00:19:03.876 "data_size": 7936 00:19:03.876 } 00:19:03.876 ] 00:19:03.876 }' 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.876 01:39:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.876 01:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.876 01:39:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:04.446 [2024-11-17 01:39:12.730409] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:04.446 [2024-11-17 01:39:12.730470] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:04.446 [2024-11-17 01:39:12.730558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:04.706 01:39:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.706 "name": "raid_bdev1", 00:19:04.706 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:04.706 "strip_size_kb": 0, 00:19:04.706 "state": "online", 00:19:04.706 "raid_level": "raid1", 00:19:04.706 "superblock": true, 00:19:04.706 "num_base_bdevs": 2, 00:19:04.706 "num_base_bdevs_discovered": 2, 00:19:04.706 "num_base_bdevs_operational": 2, 00:19:04.706 "base_bdevs_list": [ 00:19:04.706 { 00:19:04.706 "name": "spare", 00:19:04.706 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:04.706 "is_configured": true, 00:19:04.706 "data_offset": 256, 00:19:04.706 "data_size": 7936 00:19:04.706 }, 00:19:04.706 { 00:19:04.706 
"name": "BaseBdev2", 00:19:04.706 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:04.706 "is_configured": true, 00:19:04.706 "data_offset": 256, 00:19:04.706 "data_size": 7936 00:19:04.706 } 00:19:04.706 ] 00:19:04.706 }' 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.706 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:04.707 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.967 
01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.967 "name": "raid_bdev1", 00:19:04.967 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:04.967 "strip_size_kb": 0, 00:19:04.967 "state": "online", 00:19:04.967 "raid_level": "raid1", 00:19:04.967 "superblock": true, 00:19:04.967 "num_base_bdevs": 2, 00:19:04.967 "num_base_bdevs_discovered": 2, 00:19:04.967 "num_base_bdevs_operational": 2, 00:19:04.967 "base_bdevs_list": [ 00:19:04.967 { 00:19:04.967 "name": "spare", 00:19:04.967 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:04.967 "is_configured": true, 00:19:04.967 "data_offset": 256, 00:19:04.967 "data_size": 7936 00:19:04.967 }, 00:19:04.967 { 00:19:04.967 "name": "BaseBdev2", 00:19:04.967 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:04.967 "is_configured": true, 00:19:04.967 "data_offset": 256, 00:19:04.967 "data_size": 7936 00:19:04.967 } 00:19:04.967 ] 00:19:04.967 }' 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.967 "name": "raid_bdev1", 00:19:04.967 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:04.967 "strip_size_kb": 0, 00:19:04.967 "state": "online", 00:19:04.967 "raid_level": "raid1", 00:19:04.967 "superblock": true, 00:19:04.967 "num_base_bdevs": 2, 00:19:04.967 "num_base_bdevs_discovered": 2, 00:19:04.967 "num_base_bdevs_operational": 2, 00:19:04.967 "base_bdevs_list": [ 00:19:04.967 { 
00:19:04.967 "name": "spare", 00:19:04.967 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:04.967 "is_configured": true, 00:19:04.967 "data_offset": 256, 00:19:04.967 "data_size": 7936 00:19:04.967 }, 00:19:04.967 { 00:19:04.967 "name": "BaseBdev2", 00:19:04.967 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:04.967 "is_configured": true, 00:19:04.967 "data_offset": 256, 00:19:04.967 "data_size": 7936 00:19:04.967 } 00:19:04.967 ] 00:19:04.967 }' 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.967 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.538 [2024-11-17 01:39:13.781685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.538 [2024-11-17 01:39:13.781720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.538 [2024-11-17 01:39:13.781812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.538 [2024-11-17 01:39:13.781877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.538 [2024-11-17 01:39:13.781886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.538 01:39:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.538 [2024-11-17 01:39:13.853543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.538 [2024-11-17 01:39:13.853651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.538 [2024-11-17 01:39:13.853675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000009f80 00:19:05.538 [2024-11-17 01:39:13.853684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.538 [2024-11-17 01:39:13.855597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.538 [2024-11-17 01:39:13.855636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.538 [2024-11-17 01:39:13.855689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:05.538 [2024-11-17 01:39:13.855748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.538 [2024-11-17 01:39:13.855860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.538 spare 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.538 [2024-11-17 01:39:13.955746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:05.538 [2024-11-17 01:39:13.955780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.538 [2024-11-17 01:39:13.955863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:05.538 [2024-11-17 01:39:13.955933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:05.538 [2024-11-17 01:39:13.955941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:05.538 [2024-11-17 01:39:13.956022] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.538 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.539 "name": "raid_bdev1", 00:19:05.539 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:05.539 "strip_size_kb": 0, 00:19:05.539 "state": "online", 00:19:05.539 "raid_level": "raid1", 00:19:05.539 "superblock": true, 00:19:05.539 "num_base_bdevs": 2, 00:19:05.539 "num_base_bdevs_discovered": 2, 00:19:05.539 "num_base_bdevs_operational": 2, 00:19:05.539 "base_bdevs_list": [ 00:19:05.539 { 00:19:05.539 "name": "spare", 00:19:05.539 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:05.539 "is_configured": true, 00:19:05.539 "data_offset": 256, 00:19:05.539 "data_size": 7936 00:19:05.539 }, 00:19:05.539 { 00:19:05.539 "name": "BaseBdev2", 00:19:05.539 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:05.539 "is_configured": true, 00:19:05.539 "data_offset": 256, 00:19:05.539 "data_size": 7936 00:19:05.539 } 00:19:05.539 ] 00:19:05.539 }' 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.539 01:39:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.108 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.108 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.109 "name": "raid_bdev1", 00:19:06.109 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:06.109 "strip_size_kb": 0, 00:19:06.109 "state": "online", 00:19:06.109 "raid_level": "raid1", 00:19:06.109 "superblock": true, 00:19:06.109 "num_base_bdevs": 2, 00:19:06.109 "num_base_bdevs_discovered": 2, 00:19:06.109 "num_base_bdevs_operational": 2, 00:19:06.109 "base_bdevs_list": [ 00:19:06.109 { 00:19:06.109 "name": "spare", 00:19:06.109 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:06.109 "is_configured": true, 00:19:06.109 "data_offset": 256, 00:19:06.109 "data_size": 7936 00:19:06.109 }, 00:19:06.109 { 00:19:06.109 "name": "BaseBdev2", 00:19:06.109 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:06.109 "is_configured": true, 00:19:06.109 "data_offset": 256, 00:19:06.109 "data_size": 7936 00:19:06.109 } 00:19:06.109 ] 00:19:06.109 }' 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.109 [2024-11-17 01:39:14.524461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.109 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.369 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.369 "name": "raid_bdev1", 00:19:06.369 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:06.369 "strip_size_kb": 0, 00:19:06.369 "state": "online", 00:19:06.369 "raid_level": "raid1", 00:19:06.369 "superblock": true, 00:19:06.369 "num_base_bdevs": 2, 00:19:06.369 "num_base_bdevs_discovered": 1, 00:19:06.369 "num_base_bdevs_operational": 1, 00:19:06.369 "base_bdevs_list": [ 00:19:06.369 { 00:19:06.369 "name": null, 00:19:06.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.369 "is_configured": false, 00:19:06.369 "data_offset": 0, 00:19:06.369 "data_size": 7936 00:19:06.369 }, 00:19:06.369 { 00:19:06.369 "name": 
"BaseBdev2", 00:19:06.369 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:06.369 "is_configured": true, 00:19:06.369 "data_offset": 256, 00:19:06.369 "data_size": 7936 00:19:06.369 } 00:19:06.369 ] 00:19:06.369 }' 00:19:06.369 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.369 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.629 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:06.629 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.629 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.629 [2024-11-17 01:39:14.927913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.629 [2024-11-17 01:39:14.928134] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:06.629 [2024-11-17 01:39:14.928213] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:06.629 [2024-11-17 01:39:14.928273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.629 [2024-11-17 01:39:14.943266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:06.629 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.629 01:39:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:06.629 [2024-11-17 01:39:14.945075] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.569 01:39:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.569 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:07.569 "name": "raid_bdev1", 00:19:07.569 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:07.569 "strip_size_kb": 0, 00:19:07.569 "state": "online", 00:19:07.569 "raid_level": "raid1", 00:19:07.569 "superblock": true, 00:19:07.569 "num_base_bdevs": 2, 00:19:07.569 "num_base_bdevs_discovered": 2, 00:19:07.569 "num_base_bdevs_operational": 2, 00:19:07.569 "process": { 00:19:07.569 "type": "rebuild", 00:19:07.569 "target": "spare", 00:19:07.569 "progress": { 00:19:07.569 "blocks": 2560, 00:19:07.569 "percent": 32 00:19:07.569 } 00:19:07.569 }, 00:19:07.569 "base_bdevs_list": [ 00:19:07.569 { 00:19:07.569 "name": "spare", 00:19:07.569 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:07.569 "is_configured": true, 00:19:07.569 "data_offset": 256, 00:19:07.569 "data_size": 7936 00:19:07.569 }, 00:19:07.569 { 00:19:07.569 "name": "BaseBdev2", 00:19:07.569 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:07.569 "is_configured": true, 00:19:07.569 "data_offset": 256, 00:19:07.569 "data_size": 7936 00:19:07.569 } 00:19:07.569 ] 00:19:07.569 }' 00:19:07.569 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.829 [2024-11-17 01:39:16.088750] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.829 [2024-11-17 01:39:16.149594] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:07.829 [2024-11-17 01:39:16.149653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.829 [2024-11-17 01:39:16.149667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.829 [2024-11-17 01:39:16.149675] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.829 01:39:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.829 "name": "raid_bdev1", 00:19:07.829 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:07.829 "strip_size_kb": 0, 00:19:07.829 "state": "online", 00:19:07.829 "raid_level": "raid1", 00:19:07.829 "superblock": true, 00:19:07.829 "num_base_bdevs": 2, 00:19:07.829 "num_base_bdevs_discovered": 1, 00:19:07.829 "num_base_bdevs_operational": 1, 00:19:07.829 "base_bdevs_list": [ 00:19:07.829 { 00:19:07.829 "name": null, 00:19:07.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.829 "is_configured": false, 00:19:07.829 "data_offset": 0, 00:19:07.829 "data_size": 7936 00:19:07.829 }, 00:19:07.829 { 00:19:07.829 "name": "BaseBdev2", 00:19:07.829 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:07.829 "is_configured": true, 00:19:07.829 "data_offset": 256, 00:19:07.829 "data_size": 7936 00:19:07.829 } 00:19:07.829 ] 00:19:07.829 }' 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.829 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.399 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:08.399 01:39:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.399 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.399 [2024-11-17 01:39:16.618304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:08.399 [2024-11-17 01:39:16.618413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.399 [2024-11-17 01:39:16.618452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:08.399 [2024-11-17 01:39:16.618483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.399 [2024-11-17 01:39:16.618666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.399 [2024-11-17 01:39:16.618715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:08.399 [2024-11-17 01:39:16.618800] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:08.399 [2024-11-17 01:39:16.618840] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.399 [2024-11-17 01:39:16.618876] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:08.399 [2024-11-17 01:39:16.618955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.399 [2024-11-17 01:39:16.633579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:08.399 spare 00:19:08.399 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.399 [2024-11-17 01:39:16.635360] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:08.399 01:39:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:09.339 "name": "raid_bdev1", 00:19:09.339 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:09.339 "strip_size_kb": 0, 00:19:09.339 "state": "online", 00:19:09.339 "raid_level": "raid1", 00:19:09.339 "superblock": true, 00:19:09.339 "num_base_bdevs": 2, 00:19:09.339 "num_base_bdevs_discovered": 2, 00:19:09.339 "num_base_bdevs_operational": 2, 00:19:09.339 "process": { 00:19:09.339 "type": "rebuild", 00:19:09.339 "target": "spare", 00:19:09.339 "progress": { 00:19:09.339 "blocks": 2560, 00:19:09.339 "percent": 32 00:19:09.339 } 00:19:09.339 }, 00:19:09.339 "base_bdevs_list": [ 00:19:09.339 { 00:19:09.339 "name": "spare", 00:19:09.339 "uuid": "124127a4-d17a-5096-af0d-855236e6dbb2", 00:19:09.339 "is_configured": true, 00:19:09.339 "data_offset": 256, 00:19:09.339 "data_size": 7936 00:19:09.339 }, 00:19:09.339 { 00:19:09.339 "name": "BaseBdev2", 00:19:09.339 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:09.339 "is_configured": true, 00:19:09.339 "data_offset": 256, 00:19:09.339 "data_size": 7936 00:19:09.339 } 00:19:09.339 ] 00:19:09.339 }' 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.339 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.600 [2024-11-17 
01:39:17.803524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.600 [2024-11-17 01:39:17.839925] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:09.600 [2024-11-17 01:39:17.839975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.600 [2024-11-17 01:39:17.839991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.600 [2024-11-17 01:39:17.839997] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.600 01:39:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.600 "name": "raid_bdev1", 00:19:09.600 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:09.600 "strip_size_kb": 0, 00:19:09.600 "state": "online", 00:19:09.600 "raid_level": "raid1", 00:19:09.600 "superblock": true, 00:19:09.600 "num_base_bdevs": 2, 00:19:09.600 "num_base_bdevs_discovered": 1, 00:19:09.600 "num_base_bdevs_operational": 1, 00:19:09.600 "base_bdevs_list": [ 00:19:09.600 { 00:19:09.600 "name": null, 00:19:09.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.600 "is_configured": false, 00:19:09.600 "data_offset": 0, 00:19:09.600 "data_size": 7936 00:19:09.600 }, 00:19:09.600 { 00:19:09.600 "name": "BaseBdev2", 00:19:09.600 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:09.600 "is_configured": true, 00:19:09.600 "data_offset": 256, 00:19:09.600 "data_size": 7936 00:19:09.600 } 00:19:09.600 ] 00:19:09.600 }' 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.600 01:39:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.887 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.887 01:39:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.887 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.887 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.888 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.888 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.888 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.888 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.888 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.888 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.888 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.888 "name": "raid_bdev1", 00:19:09.888 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:09.888 "strip_size_kb": 0, 00:19:09.888 "state": "online", 00:19:09.888 "raid_level": "raid1", 00:19:09.888 "superblock": true, 00:19:09.888 "num_base_bdevs": 2, 00:19:09.888 "num_base_bdevs_discovered": 1, 00:19:09.888 "num_base_bdevs_operational": 1, 00:19:09.888 "base_bdevs_list": [ 00:19:09.888 { 00:19:09.888 "name": null, 00:19:09.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.888 "is_configured": false, 00:19:09.888 "data_offset": 0, 00:19:09.888 "data_size": 7936 00:19:09.888 }, 00:19:09.888 { 00:19:09.888 "name": "BaseBdev2", 00:19:09.888 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:09.888 "is_configured": true, 00:19:09.888 "data_offset": 256, 
00:19:09.888 "data_size": 7936 00:19:09.888 } 00:19:09.888 ] 00:19:09.888 }' 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.148 [2024-11-17 01:39:18.436047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:10.148 [2024-11-17 01:39:18.436168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.148 [2024-11-17 01:39:18.436194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:10.148 [2024-11-17 01:39:18.436204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.148 [2024-11-17 01:39:18.436361] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.148 [2024-11-17 01:39:18.436373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:10.148 [2024-11-17 01:39:18.436423] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:10.148 [2024-11-17 01:39:18.436435] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:10.148 [2024-11-17 01:39:18.436445] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:10.148 [2024-11-17 01:39:18.436455] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:10.148 BaseBdev1 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.148 01:39:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.089 01:39:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.089 "name": "raid_bdev1", 00:19:11.089 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:11.089 "strip_size_kb": 0, 00:19:11.089 "state": "online", 00:19:11.089 "raid_level": "raid1", 00:19:11.089 "superblock": true, 00:19:11.089 "num_base_bdevs": 2, 00:19:11.089 "num_base_bdevs_discovered": 1, 00:19:11.089 "num_base_bdevs_operational": 1, 00:19:11.089 "base_bdevs_list": [ 00:19:11.089 { 00:19:11.089 "name": null, 00:19:11.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.089 "is_configured": false, 00:19:11.089 "data_offset": 0, 00:19:11.089 "data_size": 7936 00:19:11.089 }, 00:19:11.089 { 00:19:11.089 "name": "BaseBdev2", 00:19:11.089 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:11.089 "is_configured": true, 00:19:11.089 "data_offset": 256, 00:19:11.089 "data_size": 7936 00:19:11.089 } 00:19:11.089 ] 00:19:11.089 }' 00:19:11.089 01:39:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.089 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.660 "name": "raid_bdev1", 00:19:11.660 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:11.660 "strip_size_kb": 0, 00:19:11.660 "state": "online", 00:19:11.660 "raid_level": "raid1", 00:19:11.660 "superblock": true, 00:19:11.660 "num_base_bdevs": 2, 00:19:11.660 "num_base_bdevs_discovered": 1, 00:19:11.660 "num_base_bdevs_operational": 1, 00:19:11.660 "base_bdevs_list": [ 00:19:11.660 { 00:19:11.660 "name": 
null, 00:19:11.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.660 "is_configured": false, 00:19:11.660 "data_offset": 0, 00:19:11.660 "data_size": 7936 00:19:11.660 }, 00:19:11.660 { 00:19:11.660 "name": "BaseBdev2", 00:19:11.660 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:11.660 "is_configured": true, 00:19:11.660 "data_offset": 256, 00:19:11.660 "data_size": 7936 00:19:11.660 } 00:19:11.660 ] 00:19:11.660 }' 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:11.660 01:39:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.660 [2024-11-17 01:39:20.009444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.660 [2024-11-17 01:39:20.009580] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:11.660 [2024-11-17 01:39:20.009596] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:11.660 request: 00:19:11.660 { 00:19:11.660 "base_bdev": "BaseBdev1", 00:19:11.660 "raid_bdev": "raid_bdev1", 00:19:11.660 "method": "bdev_raid_add_base_bdev", 00:19:11.660 "req_id": 1 00:19:11.660 } 00:19:11.660 Got JSON-RPC error response 00:19:11.660 response: 00:19:11.660 { 00:19:11.660 "code": -22, 00:19:11.660 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:11.660 } 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:11.660 01:39:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.601 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.861 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.861 "name": "raid_bdev1", 00:19:12.861 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:12.861 "strip_size_kb": 0, 
00:19:12.861 "state": "online", 00:19:12.861 "raid_level": "raid1", 00:19:12.861 "superblock": true, 00:19:12.861 "num_base_bdevs": 2, 00:19:12.861 "num_base_bdevs_discovered": 1, 00:19:12.861 "num_base_bdevs_operational": 1, 00:19:12.861 "base_bdevs_list": [ 00:19:12.861 { 00:19:12.861 "name": null, 00:19:12.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.861 "is_configured": false, 00:19:12.861 "data_offset": 0, 00:19:12.861 "data_size": 7936 00:19:12.861 }, 00:19:12.861 { 00:19:12.861 "name": "BaseBdev2", 00:19:12.861 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:12.861 "is_configured": true, 00:19:12.861 "data_offset": 256, 00:19:12.861 "data_size": 7936 00:19:12.861 } 00:19:12.861 ] 00:19:12.861 }' 00:19:12.861 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.861 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.122 
01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.122 "name": "raid_bdev1", 00:19:13.122 "uuid": "1c0d7881-d38d-4784-a4f2-9aed552f18b1", 00:19:13.122 "strip_size_kb": 0, 00:19:13.122 "state": "online", 00:19:13.122 "raid_level": "raid1", 00:19:13.122 "superblock": true, 00:19:13.122 "num_base_bdevs": 2, 00:19:13.122 "num_base_bdevs_discovered": 1, 00:19:13.122 "num_base_bdevs_operational": 1, 00:19:13.122 "base_bdevs_list": [ 00:19:13.122 { 00:19:13.122 "name": null, 00:19:13.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.122 "is_configured": false, 00:19:13.122 "data_offset": 0, 00:19:13.122 "data_size": 7936 00:19:13.122 }, 00:19:13.122 { 00:19:13.122 "name": "BaseBdev2", 00:19:13.122 "uuid": "273dffba-74e2-525e-9e5c-6f90eb94e605", 00:19:13.122 "is_configured": true, 00:19:13.122 "data_offset": 256, 00:19:13.122 "data_size": 7936 00:19:13.122 } 00:19:13.122 ] 00:19:13.122 }' 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.122 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88765 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88765 ']' 00:19:13.382 01:39:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88765 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88765 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88765' 00:19:13.382 killing process with pid 88765 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88765 00:19:13.382 Received shutdown signal, test time was about 60.000000 seconds 00:19:13.382 00:19:13.382 Latency(us) 00:19:13.382 [2024-11-17T01:39:21.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.382 [2024-11-17T01:39:21.842Z] =================================================================================================================== 00:19:13.382 [2024-11-17T01:39:21.842Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.382 [2024-11-17 01:39:21.632458] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:13.382 [2024-11-17 01:39:21.632575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.382 [2024-11-17 01:39:21.632624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.382 01:39:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@978 -- # wait 88765 00:19:13.382 [2024-11-17 01:39:21.632635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:13.643 [2024-11-17 01:39:21.913478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.583 01:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:14.583 00:19:14.583 real 0m17.227s 00:19:14.583 user 0m22.562s 00:19:14.583 sys 0m1.653s 00:19:14.583 01:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.583 ************************************ 00:19:14.583 END TEST raid_rebuild_test_sb_md_interleaved 00:19:14.583 ************************************ 00:19:14.583 01:39:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.583 01:39:22 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:14.583 01:39:22 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:14.583 01:39:22 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88765 ']' 00:19:14.583 01:39:22 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88765 00:19:14.583 01:39:23 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:14.583 00:19:14.583 real 11m46.731s 00:19:14.583 user 15m55.183s 00:19:14.583 sys 1m52.951s 00:19:14.583 ************************************ 00:19:14.583 END TEST bdev_raid 00:19:14.583 ************************************ 00:19:14.583 01:39:23 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.583 01:39:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.843 01:39:23 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:14.843 01:39:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:14.843 01:39:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.843 01:39:23 -- common/autotest_common.sh@10 -- # set +x 
00:19:14.843 ************************************ 00:19:14.843 START TEST spdkcli_raid 00:19:14.843 ************************************ 00:19:14.843 01:39:23 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:14.843 * Looking for test storage... 00:19:14.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:14.843 01:39:23 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:14.843 01:39:23 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:14.843 01:39:23 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:15.103 01:39:23 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.103 01:39:23 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:15.103 01:39:23 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.103 01:39:23 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:15.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.103 --rc genhtml_branch_coverage=1 00:19:15.103 --rc genhtml_function_coverage=1 00:19:15.103 --rc genhtml_legend=1 00:19:15.103 --rc geninfo_all_blocks=1 00:19:15.103 --rc geninfo_unexecuted_blocks=1 00:19:15.103 00:19:15.103 ' 00:19:15.103 01:39:23 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:15.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.103 --rc genhtml_branch_coverage=1 00:19:15.103 --rc genhtml_function_coverage=1 00:19:15.103 --rc genhtml_legend=1 00:19:15.103 --rc geninfo_all_blocks=1 00:19:15.103 --rc geninfo_unexecuted_blocks=1 00:19:15.104 00:19:15.104 ' 00:19:15.104 
01:39:23 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:15.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.104 --rc genhtml_branch_coverage=1 00:19:15.104 --rc genhtml_function_coverage=1 00:19:15.104 --rc genhtml_legend=1 00:19:15.104 --rc geninfo_all_blocks=1 00:19:15.104 --rc geninfo_unexecuted_blocks=1 00:19:15.104 00:19:15.104 ' 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:15.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.104 --rc genhtml_branch_coverage=1 00:19:15.104 --rc genhtml_function_coverage=1 00:19:15.104 --rc genhtml_legend=1 00:19:15.104 --rc geninfo_all_blocks=1 00:19:15.104 --rc geninfo_unexecuted_blocks=1 00:19:15.104 00:19:15.104 ' 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:15.104 01:39:23 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89436 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:15.104 01:39:23 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89436 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89436 ']' 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.104 01:39:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.104 [2024-11-17 01:39:23.477156] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:15.104 [2024-11-17 01:39:23.477372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89436 ] 00:19:15.364 [2024-11-17 01:39:23.656795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:15.364 [2024-11-17 01:39:23.767074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.364 [2024-11-17 01:39:23.767104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.304 01:39:24 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.304 01:39:24 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:16.304 01:39:24 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:16.304 01:39:24 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.304 01:39:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.304 01:39:24 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:16.304 01:39:24 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.304 01:39:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.304 01:39:24 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:16.304 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:16.304 ' 00:19:17.763 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:17.764 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:18.023 01:39:26 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:18.023 01:39:26 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.023 01:39:26 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.023 01:39:26 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:18.023 01:39:26 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.023 01:39:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.023 01:39:26 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:18.023 ' 00:19:18.962 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:19.221 01:39:27 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:19.221 01:39:27 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.221 01:39:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.221 01:39:27 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:19.221 01:39:27 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.221 01:39:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.221 01:39:27 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:19.221 01:39:27 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:19.791 01:39:28 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:19.791 01:39:28 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:19.791 01:39:28 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:19.791 01:39:28 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.791 01:39:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.791 01:39:28 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:19.791 01:39:28 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.791 01:39:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.791 01:39:28 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:19.791 ' 00:19:20.731 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:20.731 01:39:29 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:20.731 01:39:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.731 01:39:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:20.991 01:39:29 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:20.991 01:39:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.991 01:39:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:20.991 01:39:29 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:20.991 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:20.991 ' 00:19:22.371 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:22.371 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:22.371 01:39:30 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.371 01:39:30 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89436 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89436 ']' 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89436 00:19:22.371 01:39:30 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89436 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89436' 00:19:22.371 killing process with pid 89436 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89436 00:19:22.371 01:39:30 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89436 00:19:24.912 01:39:33 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:24.912 01:39:33 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89436 ']' 00:19:24.912 Process with pid 89436 is not found 00:19:24.912 01:39:33 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89436 00:19:24.912 01:39:33 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89436 ']' 00:19:24.912 01:39:33 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89436 00:19:24.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89436) - No such process 00:19:24.912 01:39:33 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89436 is not found' 00:19:24.912 01:39:33 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:24.912 01:39:33 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:24.912 01:39:33 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:24.912 01:39:33 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:24.912 00:19:24.912 real 0m9.950s 00:19:24.912 user 0m20.451s 00:19:24.912 sys 
0m1.172s 00:19:24.912 01:39:33 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.912 01:39:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:24.912 ************************************ 00:19:24.912 END TEST spdkcli_raid 00:19:24.912 ************************************ 00:19:24.912 01:39:33 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:24.912 01:39:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.912 01:39:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.912 01:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:24.912 ************************************ 00:19:24.912 START TEST blockdev_raid5f 00:19:24.912 ************************************ 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:24.912 * Looking for test storage... 00:19:24.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.912 01:39:33 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:24.912 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.912 --rc genhtml_branch_coverage=1 00:19:24.912 --rc genhtml_function_coverage=1 00:19:24.912 --rc genhtml_legend=1 00:19:24.912 --rc geninfo_all_blocks=1 00:19:24.912 --rc geninfo_unexecuted_blocks=1 00:19:24.912 00:19:24.912 ' 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:24.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.912 --rc genhtml_branch_coverage=1 00:19:24.912 --rc genhtml_function_coverage=1 00:19:24.912 --rc genhtml_legend=1 00:19:24.912 --rc geninfo_all_blocks=1 00:19:24.912 --rc geninfo_unexecuted_blocks=1 00:19:24.912 00:19:24.912 ' 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:24.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.912 --rc genhtml_branch_coverage=1 00:19:24.912 --rc genhtml_function_coverage=1 00:19:24.912 --rc genhtml_legend=1 00:19:24.912 --rc geninfo_all_blocks=1 00:19:24.912 --rc geninfo_unexecuted_blocks=1 00:19:24.912 00:19:24.912 ' 00:19:24.912 01:39:33 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:24.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.912 --rc genhtml_branch_coverage=1 00:19:24.912 --rc genhtml_function_coverage=1 00:19:24.912 --rc genhtml_legend=1 00:19:24.912 --rc geninfo_all_blocks=1 00:19:24.912 --rc geninfo_unexecuted_blocks=1 00:19:24.912 00:19:24.912 ' 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:24.912 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89719 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:25.173 01:39:33 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89719 00:19:25.173 01:39:33 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89719 ']' 00:19:25.173 01:39:33 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.173 01:39:33 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.173 01:39:33 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.173 01:39:33 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.173 01:39:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:25.173 [2024-11-17 01:39:33.470950] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:25.173 [2024-11-17 01:39:33.471126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89719 ] 00:19:25.433 [2024-11-17 01:39:33.641675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.433 [2024-11-17 01:39:33.749631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.372 01:39:34 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.372 01:39:34 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:26.372 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:26.372 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:26.372 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:26.373 01:39:34 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.373 Malloc0 00:19:26.373 Malloc1 00:19:26.373 Malloc2 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.373 01:39:34 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:26.373 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cfab5bd1-44e8-4477-9314-f7f9605a0755"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "cfab5bd1-44e8-4477-9314-f7f9605a0755",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cfab5bd1-44e8-4477-9314-f7f9605a0755",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c59db4e7-259e-47ed-8b75-6bae5bd4d03f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"0c83e729-75a9-497f-94de-4f198de3f266",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4a48b032-4730-493d-8e4e-22139a1ccb5f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:26.633 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:26.633 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:26.633 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:26.633 01:39:34 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89719 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89719 ']' 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89719 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89719 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.633 killing process with pid 89719 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89719' 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89719 00:19:26.633 01:39:34 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89719 00:19:29.178 01:39:37 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:29.178 01:39:37 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:29.178 01:39:37 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:29.178 01:39:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.178 01:39:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.178 ************************************ 00:19:29.178 START TEST bdev_hello_world 00:19:29.178 ************************************ 00:19:29.178 01:39:37 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:29.178 [2024-11-17 01:39:37.451898] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:29.178 [2024-11-17 01:39:37.452011] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89781 ] 00:19:29.178 [2024-11-17 01:39:37.632524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.439 [2024-11-17 01:39:37.741184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.009 [2024-11-17 01:39:38.263496] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:30.009 [2024-11-17 01:39:38.263543] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:30.009 [2024-11-17 01:39:38.263559] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:30.009 [2024-11-17 01:39:38.264020] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:30.009 [2024-11-17 01:39:38.264144] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:30.009 [2024-11-17 01:39:38.264160] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:30.009 [2024-11-17 01:39:38.264205] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:30.009 00:19:30.009 [2024-11-17 01:39:38.264222] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:31.402 00:19:31.403 real 0m2.181s 00:19:31.403 user 0m1.815s 00:19:31.403 sys 0m0.245s 00:19:31.403 ************************************ 00:19:31.403 END TEST bdev_hello_world 00:19:31.403 ************************************ 00:19:31.403 01:39:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.403 01:39:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:31.403 01:39:39 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:31.403 01:39:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.403 01:39:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.403 01:39:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:31.403 ************************************ 00:19:31.403 START TEST bdev_bounds 00:19:31.403 ************************************ 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:31.403 Process bdevio pid: 89823 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89823 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89823' 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89823 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89823 ']' 00:19:31.403 01:39:39 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.403 01:39:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:31.403 [2024-11-17 01:39:39.717398] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:31.403 [2024-11-17 01:39:39.717625] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89823 ] 00:19:31.662 [2024-11-17 01:39:39.895546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.662 [2024-11-17 01:39:40.010701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.662 [2024-11-17 01:39:40.010842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.662 [2024-11-17 01:39:40.010893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.232 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.232 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:32.232 01:39:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:32.232 I/O targets: 00:19:32.232 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:32.232 00:19:32.232 
00:19:32.232 CUnit - A unit testing framework for C - Version 2.1-3 00:19:32.232 http://cunit.sourceforge.net/ 00:19:32.232 00:19:32.232 00:19:32.232 Suite: bdevio tests on: raid5f 00:19:32.232 Test: blockdev write read block ...passed 00:19:32.232 Test: blockdev write zeroes read block ...passed 00:19:32.232 Test: blockdev write zeroes read no split ...passed 00:19:32.492 Test: blockdev write zeroes read split ...passed 00:19:32.492 Test: blockdev write zeroes read split partial ...passed 00:19:32.492 Test: blockdev reset ...passed 00:19:32.492 Test: blockdev write read 8 blocks ...passed 00:19:32.492 Test: blockdev write read size > 128k ...passed 00:19:32.492 Test: blockdev write read invalid size ...passed 00:19:32.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.492 Test: blockdev write read max offset ...passed 00:19:32.492 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.492 Test: blockdev writev readv 8 blocks ...passed 00:19:32.492 Test: blockdev writev readv 30 x 1block ...passed 00:19:32.492 Test: blockdev writev readv block ...passed 00:19:32.492 Test: blockdev writev readv size > 128k ...passed 00:19:32.492 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:32.492 Test: blockdev comparev and writev ...passed 00:19:32.492 Test: blockdev nvme passthru rw ...passed 00:19:32.492 Test: blockdev nvme passthru vendor specific ...passed 00:19:32.492 Test: blockdev nvme admin passthru ...passed 00:19:32.492 Test: blockdev copy ...passed 00:19:32.492 00:19:32.492 Run Summary: Type Total Ran Passed Failed Inactive 00:19:32.492 suites 1 1 n/a 0 0 00:19:32.492 tests 23 23 23 0 0 00:19:32.492 asserts 130 130 130 0 n/a 00:19:32.492 00:19:32.492 Elapsed time = 0.610 seconds 00:19:32.492 0 00:19:32.492 01:39:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89823 00:19:32.492 
01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89823 ']' 00:19:32.492 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89823 00:19:32.492 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:32.492 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.492 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89823 00:19:32.751 killing process with pid 89823 00:19:32.751 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.751 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.751 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89823' 00:19:32.751 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89823 00:19:32.751 01:39:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89823 00:19:34.135 01:39:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:34.135 00:19:34.135 real 0m2.643s 00:19:34.135 user 0m6.530s 00:19:34.135 sys 0m0.397s 00:19:34.135 01:39:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.135 01:39:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 ************************************ 00:19:34.135 END TEST bdev_bounds 00:19:34.135 ************************************ 00:19:34.135 01:39:42 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:34.135 01:39:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:34.135 01:39:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.135 
01:39:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 ************************************ 00:19:34.135 START TEST bdev_nbd 00:19:34.135 ************************************ 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89888 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89888 /var/tmp/spdk-nbd.sock 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89888 ']' 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:34.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.135 01:39:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 [2024-11-17 01:39:42.438884] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:34.135 [2024-11-17 01:39:42.439049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.395 [2024-11-17 01:39:42.612507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.395 [2024-11-17 01:39:42.717905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:34.966 01:39:43 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:35.225 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:35.225 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:35.225 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:35.225 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:35.226 1+0 records in 00:19:35.226 1+0 records out 00:19:35.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423222 s, 9.7 MB/s 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:35.226 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:35.486 { 00:19:35.486 "nbd_device": "/dev/nbd0", 00:19:35.486 "bdev_name": "raid5f" 00:19:35.486 } 00:19:35.486 ]' 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:35.486 { 00:19:35.486 "nbd_device": "/dev/nbd0", 00:19:35.486 "bdev_name": "raid5f" 00:19:35.486 } 00:19:35.486 ]' 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.486 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.746 01:39:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:36.006 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:36.006 /dev/nbd0 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:36.267 01:39:44 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:36.267 1+0 records in 00:19:36.267 1+0 records out 00:19:36.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629978 s, 6.5 MB/s 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:36.267 { 00:19:36.267 "nbd_device": "/dev/nbd0", 00:19:36.267 "bdev_name": "raid5f" 00:19:36.267 } 00:19:36.267 ]' 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:36.267 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:36.267 { 00:19:36.267 "nbd_device": "/dev/nbd0", 00:19:36.267 "bdev_name": "raid5f" 00:19:36.267 } 00:19:36.267 ]' 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:36.527 256+0 records in 00:19:36.527 256+0 records out 00:19:36.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014613 s, 71.8 MB/s 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:36.527 256+0 records in 00:19:36.527 256+0 records out 00:19:36.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310246 s, 33.8 MB/s 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.527 01:39:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:36.788 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
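The `waitfornbd` and `waitfornbd_exit` helpers traced above both poll `/proc/partitions` up to 20 times with `grep -q -w` until the nbd device name appears (or disappears). A minimal sketch of that polling pattern, with a temporary file standing in for `/proc/partitions` so it runs without a real nbd device (the helper name and retry count follow the trace; everything else is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd polling loop from autotest_common.sh: retry up to
# 20 times until the device name appears as a whole word in the partitions
# table, then succeed; fail if it never shows up.
waitfornbd_sketch() {
    local nbd_name=$1 partitions=$2 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches "nbd0" but not "nbd01", mirroring `grep -q -w nbd0 /proc/partitions`
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        sleep 0.1   # give the kernel time to register the device
    done
    return 1
}

partitions=$(mktemp)                         # stand-in for /proc/partitions
echo "  43        0       8192 nbd0" > "$partitions"
waitfornbd_sketch nbd0 "$partitions" && echo "nbd0 ready"
rm -f "$partitions"
```

The real helper additionally confirms the device is usable by `dd`-reading one 4 KiB block from it with `iflag=direct`, as seen in the trace.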
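The `nbd_dd_data_verify` cycle traced above has two phases: a write phase that fills a temp file with 1 MiB from `/dev/urandom` and copies it onto the device, and a verify phase that byte-compares the first 1M with `cmp -b -n 1M`. A sketch of both phases with an ordinary file standing in for `/dev/nbd0` (so no nbd device or `oflag=direct` is needed):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write + verify phases from nbd_common.sh,
# with a plain file playing the role of /dev/nbd0.
tmp_file=$(mktemp)   # plays the role of .../test/bdev/nbdrandtest
target=$(mktemp)     # plays the role of /dev/nbd0

# write phase: 256 x 4 KiB of random data, then copy it onto the target
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
dd if="$tmp_file" of="$target" bs=4096 count=256 status=none

# verify phase: byte-for-byte comparison of the first 1 MiB, as in the trace
cmp -b -n 1M "$tmp_file" "$target" && echo "verify ok"

rm -f "$tmp_file" "$target"
```

`cmp -b` prints differing bytes if any are found, and the size suffix in `-n 1M` is GNU diffutils syntax, both exactly as invoked in the log.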
00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:37.048 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:37.308 malloc_lvol_verify 00:19:37.308 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:37.308 b303b261-ae05-4703-b358-ce59bcaceafb 00:19:37.308 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:37.568 f9ebe1e3-0bdc-4080-9900-cca840ecce1c 00:19:37.568 01:39:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:37.827 /dev/nbd0 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:37.827 mke2fs 1.47.0 (5-Feb-2023) 00:19:37.827 Discarding device blocks: 0/4096 done 00:19:37.827 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:37.827 00:19:37.827 Allocating group tables: 0/1 done 00:19:37.827 Writing inode tables: 0/1 done 00:19:37.827 Creating journal (1024 blocks): done 00:19:37.827 Writing superblocks and filesystem accounting information: 0/1 done 00:19:37.827 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:37.827 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89888 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89888 ']' 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89888 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89888 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.088 killing process with pid 89888 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89888' 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89888 00:19:38.088 01:39:46 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89888 00:19:39.470 01:39:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:39.470 00:19:39.470 real 0m5.440s 00:19:39.470 user 0m7.352s 00:19:39.470 sys 0m1.328s 00:19:39.470 01:39:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.470 01:39:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:39.470 ************************************ 00:19:39.470 END TEST bdev_nbd 00:19:39.470 ************************************ 00:19:39.470 01:39:47 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:39.470 01:39:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:39.470 01:39:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:39.470 01:39:47 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:39.470 01:39:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.470 01:39:47 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.470 01:39:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.470 ************************************ 00:19:39.470 START TEST bdev_fio 00:19:39.470 ************************************ 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:39.470 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:39.470 01:39:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:39.731 ************************************ 00:19:39.731 START TEST bdev_fio_rw_verify 00:19:39.731 ************************************ 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:39.731 01:39:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:40.009 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:40.009 fio-3.35 00:19:40.009 Starting 1 thread 00:19:52.284 00:19:52.284 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90078: Sun Nov 17 01:39:59 2024 00:19:52.284 read: IOPS=12.5k, BW=48.9MiB/s (51.3MB/s)(489MiB/10000msec) 00:19:52.284 slat (usec): min=16, max=258, avg=18.66, stdev= 2.06 00:19:52.284 clat (usec): min=10, max=495, avg=127.40, stdev=44.34 00:19:52.284 lat (usec): min=29, max=514, avg=146.06, stdev=44.55 00:19:52.284 clat percentiles (usec): 00:19:52.284 | 50.000th=[ 131], 99.000th=[ 210], 99.900th=[ 235], 99.990th=[ 400], 00:19:52.284 | 99.999th=[ 478] 00:19:52.284 write: IOPS=13.1k, BW=51.2MiB/s (53.6MB/s)(505MiB/9878msec); 0 zone resets 00:19:52.284 slat (usec): min=7, max=307, avg=16.03, stdev= 4.38 00:19:52.284 clat (usec): min=58, max=1632, avg=296.76, stdev=48.33 00:19:52.284 lat (usec): min=72, max=1886, avg=312.78, stdev=50.15 00:19:52.284 clat percentiles (usec): 00:19:52.284 | 50.000th=[ 302], 99.000th=[ 375], 99.900th=[ 873], 99.990th=[ 1467], 00:19:52.284 | 99.999th=[ 1549] 00:19:52.284 bw ( KiB/s): min=48856, max=54352, per=98.88%, avg=51794.11, stdev=1541.43, samples=19 00:19:52.284 iops : min=12214, max=13588, avg=12948.53, stdev=385.36, samples=19 00:19:52.284 lat (usec) : 20=0.01%, 50=0.01%, 100=16.63%, 
250=38.85%, 500=44.35% 00:19:52.284 lat (usec) : 750=0.10%, 1000=0.04% 00:19:52.284 lat (msec) : 2=0.03% 00:19:52.284 cpu : usr=98.82%, sys=0.46%, ctx=27, majf=0, minf=10222 00:19:52.284 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.284 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.284 issued rwts: total=125308,129354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.284 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:52.284 00:19:52.284 Run status group 0 (all jobs): 00:19:52.284 READ: bw=48.9MiB/s (51.3MB/s), 48.9MiB/s-48.9MiB/s (51.3MB/s-51.3MB/s), io=489MiB (513MB), run=10000-10000msec 00:19:52.284 WRITE: bw=51.2MiB/s (53.6MB/s), 51.2MiB/s-51.2MiB/s (53.6MB/s-53.6MB/s), io=505MiB (530MB), run=9878-9878msec 00:19:52.284 ----------------------------------------------------- 00:19:52.284 Suppressions used: 00:19:52.284 count bytes template 00:19:52.284 1 7 /usr/src/fio/parse.c 00:19:52.284 161 15456 /usr/src/fio/iolog.c 00:19:52.284 1 8 libtcmalloc_minimal.so 00:19:52.284 1 904 libcrypto.so 00:19:52.284 ----------------------------------------------------- 00:19:52.284 00:19:52.284 00:19:52.284 real 0m12.615s 00:19:52.284 user 0m12.922s 00:19:52.284 sys 0m0.693s 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:52.284 ************************************ 00:19:52.284 END TEST bdev_fio_rw_verify 00:19:52.284 ************************************ 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:52.284 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:52.285 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:52.285 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:52.285 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:52.285 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:52.285 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:52.285 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:52.285 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "cfab5bd1-44e8-4477-9314-f7f9605a0755"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"cfab5bd1-44e8-4477-9314-f7f9605a0755",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "cfab5bd1-44e8-4477-9314-f7f9605a0755",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c59db4e7-259e-47ed-8b75-6bae5bd4d03f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "0c83e729-75a9-497f-94de-4f198de3f266",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "4a48b032-4730-493d-8e4e-22139a1ccb5f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:52.546 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:52.546 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:52.546 /home/vagrant/spdk_repo/spdk 00:19:52.546 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:52.546 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:52.546 01:40:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:52.546 00:19:52.546 real 0m12.911s 00:19:52.546 user 0m13.038s 00:19:52.546 sys 0m0.839s 00:19:52.546 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.546 01:40:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:52.546 ************************************ 00:19:52.546 END TEST bdev_fio 00:19:52.546 ************************************ 00:19:52.546 01:40:00 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:52.546 01:40:00 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:52.546 01:40:00 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:52.546 01:40:00 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.546 01:40:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.546 ************************************ 00:19:52.546 START TEST bdev_verify 00:19:52.546 ************************************ 00:19:52.546 01:40:00 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:52.546 [2024-11-17 01:40:00.937409] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
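The fio trim setup above selects candidate bdevs with `jq -r 'select(.supported_io_types.unmap == true) | .name'`; since the raid5f bdev reports `"unmap": false`, the filter yields an empty string and the subsequent `[[ -n '' ]]` check skips the trim job. A sketch of that filter on minimal bdev records (the JSON is trimmed to just the fields the filter touches; the Malloc0 record is a hypothetical positive case):

```shell
#!/usr/bin/env bash
# Sketch of the blockdev.sh@354 jq filter: emit a bdev's name only when it
# advertises unmap support. raid5f does not, so it is filtered out.
filter='select(.supported_io_types.unmap == true) | .name'

echo '{"name": "raid5f", "supported_io_types": {"unmap": false}}' \
    | jq -r "$filter"            # no output: raid5f is dropped by select()

echo '{"name": "Malloc0", "supported_io_types": {"unmap": true}}' \
    | jq -r "$filter"            # prints: Malloc0
```

The same `jq -r` idiom appears earlier in the log in `nbd_get_count`, where `.[] | .nbd_device` extracts device paths that are then counted with `grep -c /dev/nbd`.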
00:19:52.546 [2024-11-17 01:40:00.937535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90242 ] 00:19:52.806 [2024-11-17 01:40:01.114852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:52.806 [2024-11-17 01:40:01.224893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.806 [2024-11-17 01:40:01.224923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.375 Running I/O for 5 seconds... 00:19:55.696 10731.00 IOPS, 41.92 MiB/s [2024-11-17T01:40:05.096Z] 10842.00 IOPS, 42.35 MiB/s [2024-11-17T01:40:06.036Z] 10881.67 IOPS, 42.51 MiB/s [2024-11-17T01:40:06.976Z] 10873.25 IOPS, 42.47 MiB/s [2024-11-17T01:40:06.976Z] 10869.80 IOPS, 42.46 MiB/s 00:19:58.516 Latency(us) 00:19:58.516 [2024-11-17T01:40:06.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.516 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:58.516 Verification LBA range: start 0x0 length 0x2000 00:19:58.516 raid5f : 5.02 4393.31 17.16 0.00 0.00 43807.04 252.20 30678.86 00:19:58.516 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:58.516 Verification LBA range: start 0x2000 length 0x2000 00:19:58.516 raid5f : 5.01 6470.05 25.27 0.00 0.00 29780.39 1459.54 21520.99 00:19:58.516 [2024-11-17T01:40:06.976Z] =================================================================================================================== 00:19:58.516 [2024-11-17T01:40:06.976Z] Total : 10863.37 42.44 0.00 0.00 35455.71 252.20 30678.86 00:19:59.897 00:19:59.898 real 0m7.241s 00:19:59.898 user 0m13.390s 00:19:59.898 sys 0m0.285s 00:19:59.898 01:40:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.898 01:40:08 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:59.898 ************************************ 00:19:59.898 END TEST bdev_verify 00:19:59.898 ************************************ 00:19:59.898 01:40:08 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:59.898 01:40:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:59.898 01:40:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.898 01:40:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:59.898 ************************************ 00:19:59.898 START TEST bdev_verify_big_io 00:19:59.898 ************************************ 00:19:59.898 01:40:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:59.898 [2024-11-17 01:40:08.242200] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:59.898 [2024-11-17 01:40:08.242310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90341 ] 00:20:00.157 [2024-11-17 01:40:08.416430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:00.157 [2024-11-17 01:40:08.528161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.157 [2024-11-17 01:40:08.528183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.727 Running I/O for 5 seconds... 
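The bdev_verify Latency table earlier can be cross-checked arithmetically: the combined 10863.37 IOPS at the 4096-byte I/O size from the bdevperf invocation (`-o 4096`) should reproduce the 42.44 MiB/s total the table reports:

```shell
#!/usr/bin/env bash
# Cross-check of the bdev_verify totals: IOPS x 4 KiB per I/O, converted
# from bytes/s to MiB/s, matches the reported throughput column.
awk 'BEGIN { printf "%.2f MiB/s\n", 10863.37 * 4096 / 1048576 }'   # prints 42.44 MiB/s
```

The same arithmetic applies to the per-core rows (e.g. 6470.05 IOPS on core 1 corresponds to its 25.27 MiB/s figure).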
00:20:03.062 633.00 IOPS, 39.56 MiB/s [2024-11-17T01:40:12.462Z] 761.00 IOPS, 47.56 MiB/s [2024-11-17T01:40:13.402Z] 782.00 IOPS, 48.88 MiB/s [2024-11-17T01:40:14.342Z] 793.25 IOPS, 49.58 MiB/s [2024-11-17T01:40:14.602Z] 812.00 IOPS, 50.75 MiB/s 00:20:06.142 Latency(us) 00:20:06.142 [2024-11-17T01:40:14.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.142 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:06.142 Verification LBA range: start 0x0 length 0x200 00:20:06.142 raid5f : 5.33 356.82 22.30 0.00 0.00 8870246.76 228.95 386462.07 00:20:06.142 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:06.142 Verification LBA range: start 0x200 length 0x200 00:20:06.142 raid5f : 5.22 449.73 28.11 0.00 0.00 7053237.61 200.33 302209.68 00:20:06.142 [2024-11-17T01:40:14.602Z] =================================================================================================================== 00:20:06.142 [2024-11-17T01:40:14.602Z] Total : 806.55 50.41 0.00 0.00 7866638.79 200.33 386462.07 00:20:07.521 00:20:07.521 real 0m7.523s 00:20:07.521 user 0m13.989s 00:20:07.521 sys 0m0.250s 00:20:07.521 01:40:15 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.521 01:40:15 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.521 ************************************ 00:20:07.521 END TEST bdev_verify_big_io 00:20:07.521 ************************************ 00:20:07.521 01:40:15 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:07.521 01:40:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:07.521 01:40:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.521 01:40:15 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:07.521 ************************************ 00:20:07.521 START TEST bdev_write_zeroes 00:20:07.521 ************************************ 00:20:07.521 01:40:15 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:07.521 [2024-11-17 01:40:15.841294] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:07.521 [2024-11-17 01:40:15.841413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90439 ] 00:20:07.781 [2024-11-17 01:40:16.020249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.781 [2024-11-17 01:40:16.128957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.351 Running I/O for 1 seconds... 
00:20:09.291 30399.00 IOPS, 118.75 MiB/s
00:20:09.291 Latency(us)
00:20:09.291 [2024-11-17T01:40:17.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:09.291 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:09.291 raid5f : 1.01 30373.07 118.64 0.00 0.00 4200.99 1294.98 5752.29
00:20:09.291 [2024-11-17T01:40:17.751Z] ===================================================================================================================
00:20:09.291 [2024-11-17T01:40:17.751Z] Total : 30373.07 118.64 0.00 0.00 4200.99 1294.98 5752.29
00:20:10.673
00:20:10.673 real 0m3.209s
00:20:10.673 user 0m2.823s
00:20:10.673 sys 0m0.259s
00:20:10.673 01:40:18 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:10.673 01:40:18 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:20:10.673 ************************************
00:20:10.673 END TEST bdev_write_zeroes
00:20:10.673 ************************************
00:20:10.673 01:40:19 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:10.673 01:40:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:20:10.673 01:40:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:10.673 01:40:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:10.673 ************************************
00:20:10.673 START TEST bdev_json_nonenclosed
00:20:10.673 ************************************
00:20:10.674 01:40:19 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:10.674 [2024-11-17 01:40:19.128889] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:20:10.674 [2024-11-17 01:40:19.129021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90487 ]
00:20:10.941 [2024-11-17 01:40:19.309068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:11.213 [2024-11-17 01:40:19.416418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:11.213 [2024-11-17 01:40:19.416516] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:20:11.213 [2024-11-17 01:40:19.416541] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:11.213 [2024-11-17 01:40:19.416551] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:11.213
00:20:11.213 real 0m0.620s
00:20:11.213 user 0m0.372s
00:20:11.213 sys 0m0.142s
00:20:11.213 01:40:19 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:11.213 01:40:19 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:20:11.213 ************************************
00:20:11.213 END TEST bdev_json_nonenclosed
00:20:11.213 ************************************
00:20:11.495 01:40:19 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:11.495 01:40:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:20:11.495 01:40:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:11.495 01:40:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:11.495 ************************************
00:20:11.495 START TEST bdev_json_nonarray ************************************
00:20:11.495 01:40:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:11.755 [2024-11-17 01:40:19.811449] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:20:11.755 [2024-11-17 01:40:19.811561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90519 ]
00:20:11.755 [2024-11-17 01:40:19.984959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:11.755 [2024-11-17 01:40:20.091025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:11.755 [2024-11-17 01:40:20.091120] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:20:11.755 [2024-11-17 01:40:20.091137] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:11.755 [2024-11-17 01:40:20.091154] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:12.015 ************************************
00:20:12.015 END TEST bdev_json_nonarray
00:20:12.015 ************************************
00:20:12.015
00:20:12.015 real 0m0.600s
00:20:12.015 user 0m0.363s
00:20:12.015 sys 0m0.133s
00:20:12.015 01:40:20 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:12.015 01:40:20 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:20:12.015 01:40:20 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:20:12.015
00:20:12.015 real 0m47.282s
00:20:12.015 user 1m3.962s
00:20:12.015 sys 0m4.999s
00:20:12.015 ************************************
00:20:12.015 END TEST blockdev_raid5f
00:20:12.015 ************************************
00:20:12.015 01:40:20 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:12.015 01:40:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:12.015 01:40:20 -- spdk/autotest.sh@194 -- # uname -s
00:20:12.015 01:40:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:20:12.015 01:40:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:12.015 01:40:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:12.015 01:40:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:20:12.015 01:40:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:20:12.015 01:40:20 -- spdk/autotest.sh@260 -- # timing_exit lib
00:20:12.015 01:40:20 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:12.015 01:40:20 -- common/autotest_common.sh@10 -- # set +x
00:20:12.275 01:40:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:20:12.275 01:40:20 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:20:12.275 01:40:20 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:20:12.275 01:40:20 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:20:12.275 01:40:20 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:20:12.275 01:40:20 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:20:12.275 01:40:20 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:20:12.275 01:40:20 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:12.275 01:40:20 -- common/autotest_common.sh@10 -- # set +x
00:20:12.275 01:40:20 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:20:12.275 01:40:20 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:20:12.275 01:40:20 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:20:12.275 01:40:20 -- common/autotest_common.sh@10 -- # set +x
00:20:14.818 INFO: APP EXITING
00:20:14.818 INFO: killing all VMs
00:20:14.818 INFO: killing vhost app
00:20:14.818 INFO: EXIT DONE
00:20:15.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:15.078 Waiting for block devices as requested
00:20:15.078 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:20:15.078 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:16.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:16.279 Cleaning
00:20:16.279 Removing: /var/run/dpdk/spdk0/config
00:20:16.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:16.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:16.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:16.279 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:16.279 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:16.279 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:16.279 Removing: /dev/shm/spdk_tgt_trace.pid56837
00:20:16.279 Removing: /var/run/dpdk/spdk0
00:20:16.279 Removing: /var/run/dpdk/spdk_pid56602
00:20:16.279 Removing: /var/run/dpdk/spdk_pid56837
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57066
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57170
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57222
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57354
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57372
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57578
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57689
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57796
00:20:16.279 Removing: /var/run/dpdk/spdk_pid57917
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58021
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58066
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58097
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58173
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58301
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58738
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58813
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58882
00:20:16.279 Removing: /var/run/dpdk/spdk_pid58902
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59048
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59064
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59214
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59232
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59302
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59325
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59395
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59413
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59619
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59650
00:20:16.279 Removing: /var/run/dpdk/spdk_pid59739
00:20:16.279 Removing: /var/run/dpdk/spdk_pid61072
00:20:16.279 Removing: /var/run/dpdk/spdk_pid61278
00:20:16.279 Removing: /var/run/dpdk/spdk_pid61424
00:20:16.279 Removing: /var/run/dpdk/spdk_pid62056
00:20:16.279 Removing: /var/run/dpdk/spdk_pid62268
00:20:16.279 Removing: /var/run/dpdk/spdk_pid62408
00:20:16.279 Removing: /var/run/dpdk/spdk_pid63046
00:20:16.279 Removing: /var/run/dpdk/spdk_pid63370
00:20:16.279 Removing: /var/run/dpdk/spdk_pid63510
00:20:16.279 Removing: /var/run/dpdk/spdk_pid64893
00:20:16.279 Removing: /var/run/dpdk/spdk_pid65146
00:20:16.540 Removing: /var/run/dpdk/spdk_pid65286
00:20:16.540 Removing: /var/run/dpdk/spdk_pid66670
00:20:16.540 Removing: /var/run/dpdk/spdk_pid66920
00:20:16.540 Removing: /var/run/dpdk/spdk_pid67065
00:20:16.540 Removing: /var/run/dpdk/spdk_pid68445
00:20:16.540 Removing: /var/run/dpdk/spdk_pid68891
00:20:16.540 Removing: /var/run/dpdk/spdk_pid69031
00:20:16.540 Removing: /var/run/dpdk/spdk_pid70511
00:20:16.540 Removing: /var/run/dpdk/spdk_pid70770
00:20:16.540 Removing: /var/run/dpdk/spdk_pid70918
00:20:16.540 Removing: /var/run/dpdk/spdk_pid72397
00:20:16.540 Removing: /var/run/dpdk/spdk_pid72663
00:20:16.540 Removing: /var/run/dpdk/spdk_pid72805
00:20:16.540 Removing: /var/run/dpdk/spdk_pid74285
00:20:16.540 Removing: /var/run/dpdk/spdk_pid74778
00:20:16.540 Removing: /var/run/dpdk/spdk_pid74929
00:20:16.540 Removing: /var/run/dpdk/spdk_pid75069
00:20:16.540 Removing: /var/run/dpdk/spdk_pid75486
00:20:16.540 Removing: /var/run/dpdk/spdk_pid76210
00:20:16.540 Removing: /var/run/dpdk/spdk_pid76604
00:20:16.540 Removing: /var/run/dpdk/spdk_pid77314
00:20:16.540 Removing: /var/run/dpdk/spdk_pid77750
00:20:16.540 Removing: /var/run/dpdk/spdk_pid78501
00:20:16.540 Removing: /var/run/dpdk/spdk_pid78911
00:20:16.540 Removing: /var/run/dpdk/spdk_pid80874
00:20:16.540 Removing: /var/run/dpdk/spdk_pid81308
00:20:16.540 Removing: /var/run/dpdk/spdk_pid81748
00:20:16.540 Removing: /var/run/dpdk/spdk_pid83850
00:20:16.540 Removing: /var/run/dpdk/spdk_pid84334
00:20:16.540 Removing: /var/run/dpdk/spdk_pid84853
00:20:16.540 Removing: /var/run/dpdk/spdk_pid85913
00:20:16.540 Removing: /var/run/dpdk/spdk_pid86236
00:20:16.540 Removing: /var/run/dpdk/spdk_pid87174
00:20:16.540 Removing: /var/run/dpdk/spdk_pid87503
00:20:16.540 Removing: /var/run/dpdk/spdk_pid88436
00:20:16.540 Removing: /var/run/dpdk/spdk_pid88765
00:20:16.540 Removing: /var/run/dpdk/spdk_pid89436
00:20:16.540 Removing: /var/run/dpdk/spdk_pid89719
00:20:16.540 Removing: /var/run/dpdk/spdk_pid89781
00:20:16.540 Removing: /var/run/dpdk/spdk_pid89823
00:20:16.540 Removing: /var/run/dpdk/spdk_pid90068
00:20:16.540 Removing: /var/run/dpdk/spdk_pid90242
00:20:16.540 Removing: /var/run/dpdk/spdk_pid90341
00:20:16.540 Removing: /var/run/dpdk/spdk_pid90439
00:20:16.540 Removing: /var/run/dpdk/spdk_pid90487
00:20:16.540 Removing: /var/run/dpdk/spdk_pid90519
00:20:16.540 Clean
00:20:16.800 01:40:25 -- common/autotest_common.sh@1453 -- # return 0
00:20:16.800 01:40:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:20:16.800 01:40:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:16.800 01:40:25 -- common/autotest_common.sh@10 -- # set +x
00:20:16.800 01:40:25 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:20:16.800 01:40:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:16.800 01:40:25 -- common/autotest_common.sh@10 -- # set +x
00:20:16.800 01:40:25 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:16.800 01:40:25 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:16.800 01:40:25 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:16.800 01:40:25 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:20:16.800 01:40:25 -- spdk/autotest.sh@398 -- # hostname
00:20:16.800 01:40:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:17.060 geninfo: WARNING: invalid characters removed from testname!
00:20:43.630 01:40:48 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:43.630 01:40:51 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:45.012 01:40:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:46.922 01:40:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:48.832 01:40:56 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:50.742 01:40:58 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:52.667 01:41:00 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:20:52.667 01:41:00 -- spdk/autorun.sh@1 -- $ timing_finish
00:20:52.667 01:41:00 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:20:52.667 01:41:00 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:52.667 01:41:00 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:52.667 01:41:00 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:52.667 + [[ -n 5414 ]]
00:20:52.667 + sudo kill 5414
00:20:52.694 [Pipeline] }
00:20:52.710 [Pipeline] // timeout
00:20:52.715 [Pipeline] }
00:20:52.730 [Pipeline] // stage
00:20:52.736 [Pipeline] }
00:20:52.750 [Pipeline] // catchError
00:20:52.760 [Pipeline] stage
00:20:52.763 [Pipeline] { (Stop VM)
00:20:52.775 [Pipeline] sh
00:20:53.059 + vagrant halt
00:20:55.600 ==> default: Halting domain...
00:21:03.744 [Pipeline] sh
00:21:04.026 + vagrant destroy -f
00:21:06.567 ==> default: Removing domain...
00:21:06.580 [Pipeline] sh
00:21:06.864 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:21:06.874 [Pipeline] }
00:21:06.884 [Pipeline] // stage
00:21:06.889 [Pipeline] }
00:21:06.901 [Pipeline] // dir
00:21:06.906 [Pipeline] }
00:21:06.922 [Pipeline] // wrap
00:21:06.930 [Pipeline] }
00:21:06.943 [Pipeline] // catchError
00:21:06.953 [Pipeline] stage
00:21:06.956 [Pipeline] { (Epilogue)
00:21:06.969 [Pipeline] sh
00:21:07.254 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:11.468 [Pipeline] catchError
00:21:11.470 [Pipeline] {
00:21:11.482 [Pipeline] sh
00:21:11.768 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:11.768 Artifacts sizes are good
00:21:11.779 [Pipeline] }
00:21:11.793 [Pipeline] // catchError
00:21:11.804 [Pipeline] archiveArtifacts
00:21:11.811 Archiving artifacts
00:21:11.942 [Pipeline] cleanWs
00:21:11.957 [WS-CLEANUP] Deleting project workspace...
00:21:11.957 [WS-CLEANUP] Deferred wipeout is used...
00:21:11.982 [WS-CLEANUP] done
00:21:11.984 [Pipeline] }
00:21:12.002 [Pipeline] // stage
00:21:12.008 [Pipeline] }
00:21:12.022 [Pipeline] // node
00:21:12.028 [Pipeline] End of Pipeline
00:21:12.070 Finished: SUCCESS